
Chip Design for Non-Designers

_Carballo_Book.indb 1 2/20/08 4:53:46 PM

Chip Design for Non-Designers:
An Introduction

Juan-Antonio Carballo


The recommendations, advice, descriptions, and methods in this book are
presented solely for educational purposes. The author and the publisher assume
no liability whatsoever for any loss or damage that results from the use of any
of the material in this book. Use of the material in this book is solely at the risk
of the user.

Copyright © 2008 by
PennWell Corporation
1421 South Sheridan Road
Tulsa, Oklahoma 74112-6600 USA


Marketing Manager: Julie Simmons

National Account Executive: Barbara McGee

Director: Mary McGee

Managing Editor: Stephen Hill
Production Manager: Sheila Brock
Production Editor: Tony Quinn
Cover Designer: Matt Berkenbile
Book Designer: Sheila Brock
Book Layout: Aptara

Library of Congress Cataloging-in-Publication Data

Carballo, Juan-Antonio.
Chip design for non-designers / Juan-Antonio Carballo.
p. cm.
Includes bibliographical references and index.
ISBN-13: 978-1-59370-106-2
1. Integrated circuits--Design and construction. I. Title.
TK7874.C355 2008

All rights reserved. No part of this book may be reproduced,

stored in a retrieval system, or transcribed in any form or by any means,
electronic or mechanical, including photocopying and recording,
without the prior written permission of the publisher.

Printed in the United States of America

1 2 3 4 5 12 11 10 09 08


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Who Should Read This Book, and Why . . . . . . . . . . . . . . . . . ix
Focus Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Chip Design Methodologies . . . . . . . . . . . . . . . . . . . . . . . x
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

1 The Chip Design Flow . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Complexity in Chip Design . . . . . . . . . . . . . . . . . . . . . . . . 1
Design Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Chip Design Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2 Specifying a Chip . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
The Chip Specification . . . . . . . . . . . . . . . . . . . . . . . . . 13
Developing Specifications through Languages . . . . . . . . . . . . . . 14
How Are Specifications Developed? . . . . . . . . . . . . . . . . . . . 16
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

3 System-Level Design . . . . . . . . . . . . . . . . . . . . . . . . . . 19
System-Level Design: A Growing Discipline . . . . . . . . . . . . . . 19
Design Sub-flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Design Entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
What about the software? . . . . . . . . . . . . . . . . . . . . . . 27
On-chip interconnections: Buses and networks . . . . . . . . . . . . 28
What about the chip package? . . . . . . . . . . . . . . . . . . . . 31
A custom model using general-purpose languages . . . . . . . . . . 34
System-Level Design to Logic-Level Design: High-Level Synthesis . . . . . 38
Chip Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Advanced planning of an SoC and the players involved . . . . . . . . 42
Emulation as a means to accelerate high-level design
while keeping accuracy . . . . . . . . . . . . . . . . . . . . . . 43


4 RTL/Logic-Level Design . . . . . . . . . . . . . . . . . . . . . . . . . 45
RTL/Logic-Level Design: From Art to Science . . . . . . . . . . . . . 45
Design Entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Coding languages . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Graphical languages . . . . . . . . . . . . . . . . . . . . . . . . 49
Fundamental Blocks in RTL/Logic-Level Design . . . . . . . . . . . . . 49
Combinational blocks . . . . . . . . . . . . . . . . . . . . . . . 49
Sequential blocks . . . . . . . . . . . . . . . . . . . . . . . . . 52
Logic Design Entry Tools . . . . . . . . . . . . . . . . . . . . . . . . 54
Logic Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Logic Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Logic Timing Verification . . . . . . . . . . . . . . . . . . . . . . . . 65
Impact of manufacturing on timing . . . . . . . . . . . . . . . . . 69
Logic Power Verification . . . . . . . . . . . . . . . . . . . . . . . . . 71
Basic concepts in power analysis and optimization . . . . . . . . . . 72
Impact of manufacturing on power . . . . . . . . . . . . . . . . . 80
Signal Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Cross talk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Supply voltage noise . . . . . . . . . . . . . . . . . . . . . . . . 83
Electromigration . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Substrate coupling . . . . . . . . . . . . . . . . . . . . . . . . . 83
Testability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Test synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Test verification . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Pre-PD Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Impact of Manufacturing on Logic Design . . . . . . . . . . . . . . . . 91

5 Circuit and Layout Design . . . . . . . . . . . . . . . . . . . . . . . . 95

Transistor (Circuit) and Layout Design: Back to Basics . . . . . . . . . 95
Circuit Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Design entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Basics of circuit design . . . . . . . . . . . . . . . . . . . . . . . 98
Styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104


Circuit verification . . . . . . . . . . . . . . . . . . . . . . . . 109
Analog Circuit Design . . . . . . . . . . . . . . . . . . . . . . . . . 110
Design entry . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Circuit verification . . . . . . . . . . . . . . . . . . . . . . . . 125
Layout Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Layout verification: DRC . . . . . . . . . . . . . . . . . . . . 132
Layout verification: LVS . . . . . . . . . . . . . . . . . . . . 133
Layout verification: layout extraction . . . . . . . . . . . . . . 136
Characterization . . . . . . . . . . . . . . . . . . . . . . . . . 138
Analog Layout Design . . . . . . . . . . . . . . . . . . . . . . . . . 140
Chip Physical (Layout) Design . . . . . . . . . . . . . . . . . . . . . 141
Clock planning . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Power planning . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Impact of Manufacturability on Design . . . . . . . . . . . . . . . . . 144

Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Digital design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Analog design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Overall chip design . . . . . . . . . . . . . . . . . . . . . . . . . . 150
System-level design . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Logic design: timing analysis . . . . . . . . . . . . . . . . . . . 151
Logic design: power analysis and optimization . . . . . . . . . . . 151
Logic design: testability . . . . . . . . . . . . . . . . . . . . . 151
Layout design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Analog design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Digital design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Manufacturability/yield . . . . . . . . . . . . . . . . . . . . . . . . 152

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153


Who should read this book, and why
This book provides a practical introduction to modern chip design
methodologies. It is intended for manufacturing-oriented and other non-design
professionals with an interest in the pre-tape-out (pre-manufacturing) side of
design. This book concentrates on functional, logic, circuit, and layout design
using state-of-the-art methods and tools. More focus is given to the most popular
design styles, including semicustom design. Many practical and useful examples
are included throughout.
Readers who complete this book can expect to acquire a solid working
knowledge of modern chip design methodologies. The examples, exercises, and
bibliographic references provide an excellent supplement for application of the
concepts learned. Readers will become conversant with how modern chips are
designed for varied applications.
Several types of readers will be interested in this book. First, it is intended
for chip industry professionals and academics who need a practical introduction
to modern chip design, including manufacturing-oriented and other non-design
professionals, plus business practitioners who need technical background
in design. Second, it can be leveraged for improved performance in many
semiconductor-related jobs, from those held by research and development
professionals to technical marketing managers, technology strategists, business
development professionals, and technical sales professionals.

Focus areas
The focus is on the fundamentals of chip design, which should survive the
evolution in design technology. As a result, the reader should be able to fully
benefit from this book for at least five years. Few books currently in print
provide similar value. While there are good books on
chip design, they tend to have different objectives and are detailed and lengthy.
This book does not try to compete with or outdo those books. It is intended for
non-design professionals who need a practical and useful introduction.
This book may also be used as an academic text, for a course that could
be called "Introduction to Chip Design" or "Introduction to Design Methodologies,"
especially in a manufacturing-oriented degree program. It is also a suitable source
of material for internal seminars, and indeed, when appropriate, I use it for my
own seminars, with audiences ranging from 30 to 200 people.


The contents of this book have been evaluated by professional experts in
the field, including companies such as IBM, Intel, Cadence, and Synopsys, and
by professors from universities such as University of Michigan, Carnegie Mellon,
and Stanford. Top-tier technology publications have reviewed this book, and
their opinions can be found on the Web.
It is my hope that this book will provide the user with the knowledge needed
to do a job or complete a degree better, quickly and reliably, even though the
user is not a full-time expert (yet) in this topic. Some would consider this to be a
handbook. Practicality, industry relevance, up-to-dateness, and ease of use have
been primary goals in writing this book.

Chip design methodologies

A typical integrated circuit, a chip, is one of the most complex engineering
objects on earth. What is a design methodology? It is an important concept
that underpins and enables modern chip design. A design methodology can be
defined as the set of structured steps followed to complete a chip design. As can
be imagined, chip designs follow a strict, structured process to completion, just
like any other complex engineering artifact. As such, methodologies are often
composed of the following: guidelines, rules, tools, and design techniques used
for each step. Design methodologies will be taught in this book.
How does the design methodology of a chip relate to the method used to
manufacture it? Design obviously precedes manufacturing. A chip design has
to be completed before the design is sent out for production in the fabrication
facility. However, the story does not end there. After manufacturing, the chip is
tested, distributed, and used in the field. Maintenance, in-field debugging, and
other items complicate the chain of events. One can imagine, therefore, the
importance of correct, predictable designs; the time and economic investment
spent after design is so enormous that any loop-back action grows exponentially
in cost as we move further into the subsequent steps.
Fortunately, as will be clarified later in the book, manufacturing process
effects have always been modeled during the design phase by use of abstraction
and simplification, so that a designer can predict with a degree of accuracy
the impact of their actions. However, more complex manufacturing effects are
surfacing, making this simplified approach obsolete. As a result, these new and
more complex effects are being slowly inserted in the methodology flow through
painstaking research and development.
Design methodologies come in different flavors, just as a race car is not
designed in the same way as a pickup truck.


First, methodologies are domain specific:

Analog versus digital. Analog design is the process by which a circuit in
a chip is developed whose inputs and outputs are analog signals, for
example, voltages proportional to the amplitude of audio or video
signals. The word analog refers to the proportional relationship that
exists between the signal itself and a voltage or current that represents
the signal. For example, to represent a certain voice level, a voltage of
0.25 volts may be recorded by a microphone. Analog circuits feature
a continuously variable signal, while digital circuit signals can take only
one of two different levels (which are assigned the numbers 1 and 0).
Information in digital circuits is thus expressed as combinations of 1s and
0s, or bits.
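The contrast between a continuously variable signal and its digital encoding can be sketched in a few lines of Python (the book itself contains no code; the 1-volt full scale and 4-bit width here are illustrative assumptions, not taken from the text):

```python
# Sketch: representing an analog voltage in digital form.
# An analog signal varies continuously; a digital representation
# quantizes it into a fixed number of bits.

def to_bits(voltage, full_scale=1.0, n_bits=4):
    """Quantize an analog voltage (0..full_scale) into an n-bit code."""
    levels = 2 ** n_bits                       # 4 bits -> 16 discrete levels
    code = min(int(voltage / full_scale * levels), levels - 1)
    return format(code, f"0{n_bits}b")         # binary string, e.g. "0100"

# The 0.25 V microphone level mentioned above:
print(to_bits(0.25))   # -> "0100" (level 4 of 16)
```

Real analog-to-digital converters use many more bits and must contend with noise and nonlinearity, but the principle is the same: continuous values in, combinations of 1s and 0s out.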

Hardware versus software. Chips increasingly require designing the
circuits inside at the same time as the software that runs on their
embedded processors. Consider as an example the processor that runs
the applications on your cell phone. This processor needs to be designed
somewhere, in house or by a third party. The circuit blocks around
it (for example, memory, peripherals, and video/audio transmission and
coding) also may need to be designed or outsourced to a third-party
vendor. The job is done, however, only when the basic software that runs
on the processor is delivered in such a way that it works smoothly with
the rest of the circuit (hardware) blocks.
Second, methodologies can be high or low level:

High level. In this type of methodology, only the initial, or high-level,
design steps are executed with direct input from the designers. The
rest of the process is either automated or outsourced. For example,
the designers may write code in one or more languages to describe the
functionality, for example, the decoding of a video stream. Then, a
set of automated electronic design automation (EDA) tools creates the details of
the final chip design. This approach is fairly common in reconfigurable
chips (i.e., chips that can be configured to meet a specific function just by
changing some internal settings).

Low level. In this type of methodology, the chip is designed in detail, at
a low abstraction level. Because of the complexity of this level of detail,
the process may still leverage substantial automation. This approach
is most common in analog chips, since analog circuits need to be
designed with extremely high accuracy. (While digital circuits need only
to distinguish between two values, 1 and 0, there are many continuous
input and output values that analog circuits can take.)


Third, methodologies can be market specific:

Computing. Computing circuits focus on raw computational
power. As a result, they are some of the most complex digital circuits
on the planet. The best-known computing chip is the microprocessor,
a circuit that runs many different types of instructions that can be
programmed by the software developer (more on processors later).
As a result, these methodologies have an emphasis on creating large,
complex, yet well-understood, digital functionsin particular, complex
arithmetic and memory blocks.

Communications. These types of circuits typically focus on high
bandwidth combined with low power consumption, as opposed to
raw computational power. They also require transmitting information
through a channel. As a result, these methodologies emphasize the
need for analog design (for signal transmission) and great expertise
in frequency analysis and information theory. An example is a high-
speed wired communications transceiver, which may need to accurately
transmit bits through a long cable at up to 10 gigabits per second (i.e., 10
billion bits per second). Another example is a switch chip, which switches
signals that need to be routed in a complex network. Having numerous
inputs and outputs, the switch decides which output each signal arriving
on a given input should go to.

Consumer. Consumer electronics have become a key driver of the
overall electronics industry. Examples include cell phone chips (e.g., the
digital-signal-processing chip that manages audio and video decoding),
and gaming console chips (e.g., the complex chip that simulates the
movements of the hero's arms and legs in an adventure computer
game and creates the inputs for the graphics chip that will then render
them on the screen). The increasing demand for performance in these
applications has turned some of these into very complex chips. However,
consumer electronics is a highly commoditized industry; thus, low cost
concurrent with high unit volume is the main focus. As a result, these
methodologies tend to focus on minimizing chip area and on using older
manufacturing technologies whenever possible.

Automotive. Automotive electronics is one of the expected key drivers
of the electronics industry over the next decade. With hundreds of chips
inside, today's cars are some of the most complex electronic systems
available for sale, representing as much as 80% of the value of the car
in some cases. Applications include air bags, entertainment systems,
and brake controls. Because many functions in a car are critical to the


safety and normal functioning of the car, reliability is targeted by these methodologies.

Fourth, methodologies can be implementation specific; that is, a specific
function can be implemented in various ways, from designing each line in the
chip's layout to programming an existing chip:

Processors. A microprocessor is a programmable digital block that
includes all basic computer functions (or central-processing-unit
functions) on a single chip. One or more microprocessors typically serve
as the central control unit in computer, consumer, and communications
systems. Processors can be chips by themselves, or they can be
implemented as cores (logic building blocks that can be wired together
to perform many different applications) and embedded into larger
chips with other components. Programmable microprocessors have a
set of mostly well-known blocks, such as arithmetic logical units and
instruction units. Because of their broad use, processor blocks may
be heavily customized to extract as much area efficiency as possible,
especially for high-performance processors used in personal computer
or server applications. It is thanks to microprocessors that computers
became manageable in size in the 1970s. Older computers were made
from much larger switching components, with (discrete or integrated)
circuits containing the equivalent of only a few transistors. A typical
microprocessor may have hundreds of millions of transistors integrated,
reducing the cost of computational power enormously. Microprocessors
have been following Moore's law: the observation, which has held for
decades and now applies to most semiconductor devices, that the
complexity of an integrated circuit (e.g., in terms of number of
transistors) doubles every 24 months. This dictum has generally proven true since the early
1970s. This exponential increase in computational power has placed
microprocessor cores in virtually every system.
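The arithmetic behind Moore's law compounds quickly. A small Python sketch makes the point (the starting transistor count is an illustrative assumption, not a figure from the book):

```python
# Sketch: compounding Moore's law (complexity doubles every 24 months).

def transistors(start_count, years, doubling_period_years=2):
    """Projected transistor count after the given number of years."""
    return start_count * 2 ** (years / doubling_period_years)

# Starting from an illustrative 10,000 transistors:
print(transistors(10_000, 20))  # 10 doublings -> 10,240,000
print(transistors(10_000, 40))  # 20 doublings -> roughly 10 billion
```

Ten doublings multiply complexity by about a thousand, which is why two decades of scaling turn a modest chip into one with hundreds of millions of transistors.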

Application-specific integrated circuits (ASICs). ASICs are special-purpose
chips that contain a set of blocks arranged in a manner specific
to the application at hand. An ASIC is always customized for a particular
use; thus, it differs from a microprocessor. For example, a chip
designed specifically to reside inside a cell phone is an ASIC. However,
many types of ASICs have evolved to include internal processors and
memory blocks, thereby becoming systems-on-chip (SoCs). These
complex ASICs or SoCs often include processors, memory blocks, and
even peripherals. An intermediate concept between ASICs and standard
processors is the application-specific standard product (ASSP). Thanks to more


advanced technologies, smaller feature sizes, and improved electronic
design tools, ASICs are approaching very high complexity (and, thus,
functionality), including possibly hundreds of millions of basic functional
units, or gates (e.g., NOR and NOT). The unit volume for ASICs or even
SoCs is not generally as high as for processors, and they are not as
programmable. Thus, they are not as heavily customized by hand as are
major microprocessors.

Field-programmable gate arrays (FPGAs). An FPGA is a device that
includes programmable logic blocks and programmable interconnects
between these blocks. The key advantage is that programming can
be done after manufacturing the chip (i.e., in the field) by the designers
themselves. The logic blocks can be programmed to emulate the function of
basic logic gates (OR, AND, NOR, etc.) or even more complex functions
(adders, multipliers, etc.) and often include memory components, such
as registers. Interconnects are also programmable, so that logic blocks
can be linked to each other as required by the design at hand. This is
similar to making manual connections (by soldering or with wires) on
an electronic board. There are also less expensive FPGA versions that
are not as programmable (they are programmed at manufacturing,
then left with a fixed configuration). Advantages of FPGAs include
fast design turnaround, ease of debugging, and lower engineering
cost overall. In terms of disadvantages, FPGAs are typically not as
fast as ASICs and can hold only smaller designs, owing to the overhead involved
in all this programmability (i.e., a large number of small blocks and
interconnections, with many configuration options); furthermore,
the price for each chip is higher than for high-volume (high number
of units) custom chips. For small numbers of units that need to be
programmed one or more times, FPGAs are an excellent choice.
Other programmable logic devices also exist, but FPGAs are by far the
most popular and represent a growing industry.
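The programmable logic blocks of an FPGA are commonly built around small lookup tables (LUTs): tiny memories whose contents determine which logic function the block implements. A minimal sketch in Python (real LUTs are configured by a manufacturer-specific bitstream; the class interface here is a simplifying assumption for illustration):

```python
# Sketch: a 2-input lookup table (LUT), the kind of element an FPGA
# logic block is built from. "Programming" the block just means
# loading a different truth table into the same hardware.

class LUT2:
    def __init__(self, truth_table):
        # truth_table[i] is the output for inputs (a, b), where i = 2*a + b
        self.table = truth_table

    def __call__(self, a, b):
        return self.table[2 * a + b]

and_gate = LUT2([0, 0, 0, 1])   # configure the block as an AND gate
xor_gate = LUT2([0, 1, 1, 0])   # reconfigure the same block as XOR

print(and_gate(1, 1))  # -> 1
print(xor_gate(1, 1))  # -> 0
```

Because changing the function is just rewriting a few memory bits, design turnaround is fast; the price is the area and speed overhead of all those small tables and their programmable interconnect.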

Finally, methodologies can be technology specific. Advanced manufacturing
technologies allow the creation of very complex, area-efficient chips but also
require very complex design methodologies. Technologies tend to be referred to
based on the type of fundamental device that forms the basis of all circuits in that
technology (i.e., by combining millions of these devices, we assemble a chip), as
well as on the smallest possible dimension that defines such a device.
The most fundamental device in any modern technology is the transistor. As
will be described in more detail later, a transistor is a device with three functional
pins, gate, drain, and source (plus a fourth, which is fixed to the silicon substrate
or to one of the other three pins), that can act as an electrical switch. When the


switch is set to on by properly setting its three pins to specific voltages, electrical
current goes through its channel (i.e., the region between its drain and its source).
The length of its channel is a key factor in determining how easily and how fast
the transistor can be turned on. Thus, the basic transistor's minimum length is a
key parameter in a silicon technology.
Digital design is completely defined by stitching transistors together to form
small circuits, or gates, that form basic logic functions, then combining millions
of these various gates to form a complex function. Analog design uses other
electrical devices as well, such as resistors and capacitors, but transistors are still
the most important device.
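As a toy illustration of "stitching gates together," here is a one-bit half adder built entirely from NAND gates, in Python (a pedagogical sketch only; real design tools represent circuits as netlists, not as function calls):

```python
# Sketch: building a logic function by composing a single gate type.
# Every Boolean function can be built from NAND alone; real chips
# compose millions of such gates.

def nand(a, b):
    return 0 if (a and b) else 1

def half_adder(a, b):
    """One-bit addition: returns (sum, carry)."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    c = nand(n1, n1)                    # AND built from two NANDs
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# 1 + 1 gives sum 0, carry 1
```

Chaining such adders bit by bit yields the wide arithmetic blocks found in processors; the design methodology exists to manage exactly this kind of composition at a scale of millions of gates.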
The most common technology family by far in current chips is complementary
metal oxide semiconductor (CMOS) technology. This refers to the type of transistors
used in the technology. That is the reason why one may hear, for example, about
a CMOS 0.13-micrometer (µm) methodology, that is, a design methodology
for a CMOS technology that manufactures chips with transistors of a minimum
of 0.13 µm in channel length. This is likely to be a simpler methodology than
a CMOS 0.065 µm methodology, since with smaller geometries, new electrical
effects make design more complex; for example, transmission line effects may
need to be considered for long wires inside the chip to be accurately analyzed.
Exercise. Determine the meaning and definition of a chip design
methodology by performing an Internet search, looking at technical and business
publications. Find at least one mention or implicit example for each of the types
of methodologies described here: high/low level, market specific, implementation
specific, and technology specific.

By reading this book, one should expect to accomplish the following:

Attain a basic knowledge of modern chip design

Understand design process steps to manufacturable layout

Provide the specification of a chip or on-chip circuit

Know functional, logic, circuit, and layout design methods

Comprehend most popular design styles (semicustom design)

Understand computer-assisted design tools used

Appreciate manufacturing influence


Identify the impact of design techniques on manufacturability

Recognize the impact of manufacturing limitations on design

In doing so, we will focus on the pre-tape-out side; digital design, but
also analog content; computing, communications, and consumer applications;
ASIC (SoC) and semicustom design; hardware, not processor software; and key
parameters, such as power consumption. Principal assumptions regarding the
average reader are a general knowledge of chip design and some understanding
of how transistors, circuits, and logic work.
As standard disclaimers, examples should not be interpreted as part of any
product; trademarks expressed or implied belong to their respective owners; and
no endorsement of any product is intended.


1 The Chip Design Flow
This chapter describes the typical flow of steps needed to complete a
design, together with the fundamental techniques that make this flow
generally effective in modern design.

Complexity in Chip Design

Today's chip designs can be immensely complex, given the demand
from customers for highly integrated, ultrahigh-performance, low-power-
consumption, multifunction devices. For example, a cell phone design is
diagrammed in figure 1-1.

Fig. 1-1. Simplified block diagram of part of a cell phone board


As can be seen in figure 1-1, the main board of a cell phone may incorporate
a mind-numbing multitude of functions, all of which need to work at the highest
levels of reliability and performance, including

Front-end and radio-frequency circuits, which perform all the functions
required to prepare the signal to be put on or read from the
antenna of the phone at very high frequencies and also bring the signals
from those very high frequencies down to more manageable (i.e.,
lower) in-phone on-chip frequencies, and vice versa. These tend to be
analog circuits (because they need to interface with the outside world)
that work at very high frequencies (e.g., often in the gigahertz range; a
gigahertz is a billion cycles per second). Designing these circuits requires
considerable effort and expertise. The voltages and frequencies at which
they need to work may be determined by communications standards set
by an industry body.

Input/output circuits, which allow connections to and from fundamental
devices, such as the keyboard, the screen, the microphone, the music
player, or even a computer, through a cable. These circuits may be a
combination of analog (because of the interface with external chips and
systems) and digital (because they may need to perform mathematical
signal-processing operations on the information before interfacing).
These circuits often leverage standard interfaces, since they connect to
very common devices.

Power/battery management, which enables the phone's energy to be
properly managed for all chips in the phone and a reliable connection to
the battery of the device. Power management has become
an enormous source of innovation in the overall chip industry owing to
(1) the need for power efficiency in hugely popular battery-powered devices,
such as cell phones; and (2) an energy crisis in most businesses because of
the huge cost of feeding energy to companies' increasingly complex data
centers and keeping them cool enough to not exceed safety and regulatory
limits. Obviously, the latter case does not apply to cell phones.

Baseband processing, which performs all complex signal processing
at lower frequencies, digitally, so that the phone can manage signals to
eliminate interference, provide clarity, and so forth. Because this processing
may need to be done very fast, with very low energy consumption, and
may need to be applied to many different types of signals and protocols
(from cleaning a badly distorted input voice signal in a region of low
coverage to being able to understand the encoding of cell phone signals in
various countries around the world), this design is very complex.


Media processing, which is an increasingly important function that
enables phones to play music and video and, very soon, to relay location
via on-board Global Positioning System (GPS) devices (not shown in
fig. 1-1 for simplicity). While there has been a trend toward including
these functions in the baseband-processing chip, they are becoming so
complex that separate media-processing chips are now used. The trend
toward including everything but the kitchen sink in cell phones, coupled
with increasing demand for very-high-resolution media in a very slick,
thin, portable (thus, low-power-consumption) device, has made the
design of these chips very difficult as well.

Each of these blocks can be composed of a number of integrated circuits.

There are two key trends to account for in this simplified example (fig. 1-1): First,
more and more functions are integrated in the device. Second, these functions
require an increasingly large amount of computational and signal-processing
powerthat is, they need to run faster, and they need to work at higher
frequencies, which are harder to detect and generate. As a result, chips that are
more complex and harder to design are being integrated. Given the well-known
pressures of the market to bring down device cost and area (the price and size of a
cell phone cannot grow with complexity; more often than not, it has to go down),
the complexity of each of the chips in the deviceand the complexity associated
with integrating them in the productis growing dramatically. Welcome to the
world of chip design, a fast-moving discipline that never relents, as it continuously
breaks performance, functionality, energy consumption, and size records.
Exercise. Find five chip products used for mobile applications, such as the
cell phone application described here, and determine what kind of functionality
each has and whether it matches the blocks in the diagram (fig. 1-1). If available,
list performance, power consumption, and size parameters; estimate whether
these parameters were easy or difficult to satisfy when the chip was designed.
Which two chips will be integrated into one in the next cell phone generation?
Provide a reason supporting your answer. Will that integrated chip be easier or
harder to design than the two prior, separate versions?
Given the sheer complexity of today's chip design, it is critical to understand
the two fundamental techniques of design complexity management:

- Abstraction. By abstracting lower level details, designers can focus on
high-level critical design decisions first, with a manageable space of
decisions, and then move to lower level, more detailed and numerous,
decisions. Because designers start at a high level, though, it is important
to utilize an accurate model of what lies below: primarily, models of the
basic blocks that are used at a high level, models that provide a rough
estimate of the result of the high-level design. Figure 1-2 depicts a set
of abstraction levels in chip design, providing examples in the area of
communications circuits.

- Decomposition. Work is accomplished by a divide-and-conquer strategy.
At a given abstraction level, designers can decompose the design
into multiple blocks plus the interconnections between those blocks,
then design each of the blocks separately (while accounting for the
interconnections), and finally assemble the blocks.
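As a rough software analogy (illustrative only, not an actual hardware description), the sketch below shows both techniques at once: a 4-bit adder is decomposed into four 1-bit full adders, and each full adder hides its gate-level details behind a simple function.

```python
# Decomposition: a 4-bit adder is assembled from four 1-bit full adders.
# Abstraction: users of add4() never see the gate-level details inside.

def full_adder(a, b, cin):
    """One-bit full adder, described at the logic level."""
    s = a ^ b ^ cin                    # sum bit
    cout = (a & b) | (cin & (a ^ b))   # carry-out bit
    return s, cout

def add4(a_bits, b_bits):
    """Add two 4-bit numbers given as bit lists, least-significant bit first."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 6 (0110) + 3 (0011) = 9 (1001), written LSB first
print(add4([0, 1, 1, 0], [1, 1, 0, 0]))  # ([1, 0, 0, 1], 0)
```

A designer working at the logic level reasons only about add4 and full_adder; the transistor and layout details underneath are abstracted away.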

As figure 1-2 indicates, to deliver a chip that works within a communications
system, a designer must worry about key parameters or requirements that must

Abstraction level | Focus of design | Type of object | Example | Unique objects | Total objects
System | A complete chip or large core | System-on-chip; large cores | A communications router; a processor with many cores | 1-5 | 1-5
Logic/function | A logic block with a specific function | Logic blocks; logic gates | A video decoder block; an AND function | 10-100 | 10 million-100 million
Circuit | A circuit that implements the logic block | Transistor circuits | A circuit schematic that performs the function | 100-1,000 | 10 million-100 million
Physical | A physical layout that corresponds to the circuit | Transistors; cells | A layout for a block that performs the function | 100-1,000 | 100 million-1 billion
Manufacturing | A set of masks that allow the chip to be fabricated | Lithography mask layers and features | Masks representing the layout, which are sent to the fabrication facility | N/A | N/A

Fig. 1-2. Abstraction levels in integrated circuit design


be met. For example, the designer may worry about a chip's bit error rate
(BER); that is, the number of bits that will be transmitted through the line
before an error occurs (a bit that is received with the wrong value; see top of
fig. 1-2). Unfortunately, the BER for this system depends heavily on the design
of the transmitter and receiver chips. In fact, it depends heavily on much lower-
level details of those designs. To emphasize the complexity and heterogeneity
of today's designs, this example requires producing both analog and digital
design blocks.
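As a rough illustration of what BER means numerically (real links target rates such as one error in 10^12 bits; the tiny, made-up streams below only show the arithmetic):

```python
# Toy bit-error-rate (BER) calculation over made-up bit streams.
# BER = number of bits received incorrectly / total bits transmitted.

transmitted = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
received    = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]  # two bits flipped by noise

errors = sum(t != r for t, r in zip(transmitted, received))
ber = errors / len(transmitted)
print(f"{errors} errors in {len(transmitted)} bits, BER = {ber:.2f}")  # 0.20
```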
At the lowest level (from the designer's perspective) is the manufacturing
layer, where all the details of the manufacturing process are considered. At this
level, variations in the manufacturing process ultimately make it impossible to
transmit all bits correctly in the system. Nevertheless, these variations are too
complex to be considered directly at the very beginning of the design process.
Thus, they need to be abstracted.
Immediately above this is the physical layer, where the design is described by
a complex drawing, or layout. The layout depicts each of the layers of materials
that compose a physical chip, similar to the layers in a cake. At this level,
manufacturing variations translate into dimensional variations in specific parts
of the chip layout. For example, the variability in the critical dimension (CD)
produces variations in the channel length of a chip's transistors and ultimately
may have an impact on the ability to produce error-free signals.
The circuit layer sits atop the physical layer. At this level, the design is
described as a set of electrical devices connected by a number of wires. (There
is a trend toward the use of nonelectrical devices, e.g., micro-electromechanical
devices [MEMS], for such applications as automotive air bags and remote-
control gaming devices.) For digital circuits, these devices are transistors;
for analog circuits, these devices include other linear and nonlinear electrical
devices, such as resistors, capacitors, inductors, and diodes. The dimensional
variations below, in the physical layer, translate into variations in the electrical
characteristics of these devices. Most communications chips include circuits that
require matching pairs of transistors in order to correctly transmit and receive
noisy signals in the channel. Electrical device variations result in a mismatch in
transistor characteristics.
Atop the circuit layer sit the logic and functional levels, where the design
blocks are described as a set of functions at several levels of detail: a signal receiver
function, a NOT function, an ADD function, a multiplier, a memory block, and so
forth. Electrical variations at the level below may result in functional variations;
that is, variations in the functions provided by these blocks. Specifically, in the
communications example, transistor device mismatch can effectively produce
jitter in the receiver circuit that needs to understand the signal coming across the
transmission channel.


Finally, at the system level, the entire system that needs to satisfy customer
specifications is described. The characteristics of the functional blocks translate
into overall system characteristics, including the chip under design and,
potentially, a number of other components. At this level, modeling the system
accurately, including these other components, is essential to a successful design;
key instruments for this process include models that describe the behavior of
hardware blocks and models that describe the software that increasingly runs
on some of these blocks, such as small microprocessors embedded in the chip.
In the communications example, the model includes a way to compute the
BER, since that is a critical parameter in assessing whether the overall system is
correctly designed.
The bottom of figure 1-2 depicts how abstraction layers help to manage
complexity. The number of unique objects (and total objects) that need to be
designed and/or placed and/or verified grows enormously as we move from
high levels of abstraction to low levels of abstraction. For example, an advanced
application-specific integrated circuit (ASIC) could easily have 25 million logic
gates (basic blocks serving basic functions, e.g., AND, OR, and NOT) and
hundreds of millions of transistors (the smallest electrical device that can be used
to start forming logic functions).
Except for full-custom design cases, the number of unique objects may be
much smaller than the total number of objects, at any abstraction level. While
this may seem like magic, the reason for this is simple. Modern digital circuits
repeatedly use a small number of important blocks to form complex functions.
For example, by assembling combinations of NOR, OR, NAND, AND, and
NOT functions of various physical sizes (to achieve flexibility in terms of power
consumption, speed, and size), we can build practically any logic function
imaginable. Furthermore, because useful information can always be expressed
as a combination of bits, we can build any function of any kind by using
this approach.
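As a small illustration of this universality (ours, in ordinary software), the sketch below builds NOT, AND, OR, and XOR out of a single NAND primitive:

```python
# Every gate below is built from a single NAND primitive, illustrating that
# a small set of gate types suffices to construct any logic function.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)                      # classic four-NAND XOR
    return nand(nand(a, n), nand(b, n))

print([xor_(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```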
By contrast, a cell phone board or chip may have only 10 blocks or chips
that need to be assembled. At that high level, complexity is mostly relegated to
assembling these blocks properly and ensuring that the appropriate software is
written and verified to be run on the main microprocessor inside. Building a chip,
however, requires having all the details. Therefore, sooner or later, the design
team needs to create those details or import them from a third party.
Exercise. Pick a simple chip and try to determine all levels of abstraction,
from manufacturing/physical to functional/system. For example, take a chip that
performs compression and decompression for digital music and/or video. Search
information sources to find one or two examples of a media chip that includes this
functionality. What starting information would you need to commence designing
such a chip? In what language do you think it should be written?


Design Flow
The aforementioned set of abstraction levels helps deal with the complexity of
developing a chip. However, these abstraction levels are not enough. A structured
set of steps that flows through those abstraction levels is needed. This set of steps
is called design flow. Understanding this flow is critical to fully comprehending
the process and challenges involved in completing a circuit. Unfortunately,
as indicated in the introduction, design methodologies may vary along several
dimensions, including implementation style and application domain. However, for the
purpose of this book, we will use a canonical design flow; that is, a basic design
flow that fits most applications, domains, and implementation styles. Given the
focus of this book, this flow is necessarily simplified and typical, and it will not
fit every detail of a commercial design flow exactly.
Figure 1-3 depicts the canonical design flow. As figure 1-3 indicates,
designing a chip is a combination of top-down processes, bottom-up processes,
and decision loops. (While the remainder of this discussion refers to a chip, the

Fig. 1-3. Canonical chip design flow


same can be said of a large circuit block in a chip.) Design is not the end of the
story, though. The chip needs to be manufactured, tested, and assembled. The
chip is then inserted into a system that is manufactured, assembled, and put in
the hands of a customer.
Still, the focus of this book is the design side. Designers start at a high level
of abstraction and follow a set of steps toward a final description of the chip
(its layout) sufficient to start manufacturing it. This is the top-down angle. However,
at each design step, designers need to use models, of basic blocks in the chip, that
abstract their lower-level details (e.g., an electrical model). These models need to
come from the bottom layers; hence, the bottom-up angle. When going from one
layer to the next one down, the lower-level details are first generated and are then
analyzed using these models, to ensure that they meet their specifications. Thus,
decision loops are needed; that is, if the verification returns negative, designers
need to change or regenerate the lower-level details. To understand this process
in a more concrete manner, let us go over the steps depicted in figure 1-3:

- First, the specifications, or requirements, for the chip are defined at the
system level. In other words, we describe what the chip needs to do (its
function) and with what quantitative characteristics, such as speed, area,
cost, and power consumption.

- Then, the chip or circuit is designed at a high level, the system level.
Here, we describe the chip and how it functions within the entire
system, as the initial specifications require. At this level, we describe
both behavior and structure; that is, we describe the function of the
chip (behavior), as well as the set of blocks that compose the chip and
how they are interconnected with each other (structure). Some of these
blocks are analog (e.g., a circuit that generates a clock), and others are
digital (e.g., a multiplier).

- Next, each of the blocks in the chip is designed in one of three
forms: custom digital, semi-custom digital, and custom analog. (Note
that a fourth approach, using a pre-designed block, implies that another
team completed the block using one of the other three approaches.
Thus, there are only three possible approaches.) These forms are shown
as the three subflows, from left to right, in figure 1-3. Analog blocks are
designed in a custom manner (right-most subflow). Large digital blocks
are designed (middle subflow in fig. 1-3) by assembling a number of
custom-designed digital subblocks (left-most subflow in fig. 1-3).

- Finally, once design ends, manufacturing per se begins (even if the design
and manufacturing teams have been talking to each other before).
A mask set is created that faithfully reflects the layout of the chip.


Masks allow the chip to be manufactured, because integrated circuit
manufacturing is based on creating patterns on a silicon wafer through
an optical lithography process and applying these patterns to create each
of the functional, interconnection, and isolation layers on the silicon.
Every manufactured chip needs to be tested and assembled into its
package before it is ready to be shipped to the system maker.

Design, manufacturing, testing, assembly, and customer in-field usage may be
performed at the same company or, more often, at different companies, whether
for chips or for systems. Between each of these phases, there are feedback loops.
For example, when the chip is tested and a mistake is found, a change in the
design may be needed. The chip is debugged with the proper equipment, and the
design team is involved to find and fix the underlying design problem.

Chip Design Tools

The final components needed to generate complex chip designs are design
tools, software tools that automate and aid one or more steps involved in the
design flow. Design tools are also called electronic design automation (EDA)
tools. In this book, design tools will be covered from the perspective of the user
namely, the design team. The discipline involved in creating and managing these
tools, the EDA discipline, falls outside the scope of this book.
Modern design flows have dozens of tools each. Very complex, custom
designs, such as microprocessors, may have well over 50 tools. Several kinds of
design tools may be employed, depending on the purpose.
Figure 1-4 depicts the differences between the types of design tools. There
are three main categories:

- Entry/assembly tools. These are used to specify or construct a design
at a given abstraction level. For example, a logic designer uses an
entry tool to describe (literally, to type) the logic for a multiplier unit in
the very-high-speed integrated circuit (VHSIC) hardware description
language (VHDL). The process is similar to writing software code. Entry
tools do not perform analysis on the design, but rather are utilized to
create the design description and to make changes until the description
is satisfactory. Therefore, for the designer's convenience, these tools
are often tightly linked with various types of simulation and verification
tools; namely, the two types described next.
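As a rough analogy (in ordinary software, not VHDL), a behavioral entry for a small multiplier unit might look like the following sketch; a VHDL entry would describe the same behavior, plus ports and timing, in hardware-specific syntax:

```python
# A behavioral description of a 4-bit unsigned multiplier, entered as
# ordinary software. The shift-and-add loop mirrors how a simple hardware
# multiplier operates; this software analogy is ours, for intuition only.

WIDTH = 4

def multiply(a: int, b: int) -> int:
    """Shift-and-add multiplication of two WIDTH-bit unsigned values."""
    assert 0 <= a < 2 ** WIDTH and 0 <= b < 2 ** WIDTH
    product = 0
    for i in range(WIDTH):
        if (b >> i) & 1:           # if bit i of b is set...
            product += a << i      # ...add a copy of a shifted left by i
    return product                 # result fits in 2 * WIDTH bits

print(multiply(9, 13))  # 117
```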


Fig. 1-4. The difference between design entry, simulation, and verification tools

- Simulation tools. These are used to simulate a function or a parameter.
For example, a simulation tool is used to estimate the power consumed
by a piece of logic, such as a multiplier unit, in watts. The designer
can then check the parameter against the specification (e.g., is
power consumption more or less than 0.1 watts?). Simulation is a
nonexhaustive process; that is, it can simulate only a specific case (or
a limited set of cases), not prove that the circuit will work in every case.
To simulate a case, a specific workload is applied to the circuit; for
example, for an audio sound decoder, a workload may be composed of a
specific musical tune encoded as a set of bits.
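As a toy illustration of workload-driven power estimation (the capacitance, voltage, and frequency values are invented), the sketch below derives a switching-activity factor from a signal trace and plugs it into the standard dynamic-power formula P = alpha * C * V^2 * f:

```python
# Toy dynamic-power "simulation": power grows with how often signals toggle
# under a given workload. All constants are invented for illustration.

C_EFF = 2e-12   # effective switched capacitance, farads (assumed)
VDD = 1.0       # supply voltage, volts (assumed)
F_CLK = 1e9     # clock frequency, hertz (assumed)

def dynamic_power(trace):
    """Estimate P = alpha * C * V^2 * f from a sampled signal trace."""
    toggles = sum(a != b for a, b in zip(trace, trace[1:]))
    alpha = toggles / max(len(trace) - 1, 1)   # switching-activity factor
    return alpha * C_EFF * VDD ** 2 * F_CLK

workload = [0, 1, 0, 1, 1, 0, 0, 1, 0]         # one specific input case
print(f"estimated power: {dynamic_power(workload) * 1e3:.2f} mW")  # 1.50 mW
```

A different workload would give a different toggle count, and therefore a different estimate, which is exactly why simulation is nonexhaustive.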

- Verification tools. These are used to check that a design meets certain
properties. For example, a verification tool can be used to check that a
design layout for a wireless amplifier in a cell phone matches its circuit
schematic (e.g., are the transistors in the circuit correctly implemented
in the layout?). There are two possible results from a verification task:
a positive result, indicating that the design correctly matches the
specification; and a negative result, indicating what differences exist
between the design and its desired specification.
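As a drastically simplified illustration of such a check (a toy layout-versus-schematic comparison; the device names are invented), the sketch below compares two device lists and reports their differences:

```python
# Toy layout-versus-schematic (LVS) check: compare the devices extracted
# from a layout with the circuit schematic and report any differences.

from collections import Counter

def verify(schematic, layout):
    """Return (positive_result, report_of_differences)."""
    missing = Counter(schematic) - Counter(layout)
    extra = Counter(layout) - Counter(schematic)
    report = ([f"missing from layout: {d}" for d in missing]
              + [f"unexpected in layout: {d}" for d in extra])
    return not report, report

schematic = ["nmos:M1", "pmos:M2", "resistor:R1"]
layout    = ["nmos:M1", "pmos:M2"]   # R1 was left out of the layout

ok, report = verify(schematic, layout)
print(ok, report)  # False ['missing from layout: resistor:R1']
```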

In this chapter, two key weapons have been described that design teams use
to manage the daunting complexity of designing a chip: a set of clearly defined
abstraction levels; and a design flow that extends from higher levels to lower
levels, until a manufacturable design is complete. The concept of design tools,
which provide automated support for a design flow, has also been introduced;
without design tools, modern chip design would be impossible.

2 Specifying a Chip
This chapter outlines the very first, and increasingly crucial, technical
step in designing a modern chip: coming up with and completing the
details of its specification.

The Chip Specification

Before a chip is designed in the conventional sense of the word, it needs to
be specified; that is, the team needs to know and document exactly what needs
to be designed. This is, in other words, the target of the whole project.
Definition. A chip's specification is the set of requirements used as a starting
point to generate a top-level description of the design. A specification consists of
the following requirements:

- Performance requirements. These represent how well the chip performs
against some specific metric; for example, power consumption
(preferring low to high), area (preferring low to high), and speed
(preferring high to low). They are usually expressed as quantitative
bounds in standard metrics, such as "power consumption needs to be
less than or equal to one watt."

- Functional requirements. These describe what function or functions
a chip should perform. They can include a certain communications
protocol (decoding Ethernet, Transmission Control Protocol/Internet
Protocol [TCP/IP], etc.) and a microprocessor instruction set (i.e., those
instructions that a computer must be able to understand and execute).


- Interface requirements. These describe how the chip interacts with
the outside world through a set of connections. As such, interface
requirements include the list of inputs and outputs, how they translate
into pins, and their purpose. For example, a network switching chip may
have thousands of input and output pins that correspond to the inputs
and outputs for the data flowing through the switch, plus a number of
pins to control the switching algorithm and switching table (determining
through which output pin certain data, from an input pin, will come out).

As a corollary, a design at a given abstraction level is part of the specification
for the design at one or more of the lower abstraction levels.
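As an illustration of how such requirements can be made machine-checkable (all the numbers and names below are invented), the following sketch captures a small specification as data and tests a design estimate against it:

```python
# A small specification captured as data, so that a design estimate can be
# checked against it mechanically. All values are invented for illustration.

spec = {
    "max_power_watts": 1.0,          # performance requirement
    "min_speed_ghz": 2.0,            # performance requirement
    "functions": {"tcp_ip_decode"},  # functional requirement
    "pins": 256,                     # interface requirement
}

def meets_spec(spec, est):
    """Check every requirement category against a design estimate."""
    return (est["power_watts"] <= spec["max_power_watts"]
            and est["speed_ghz"] >= spec["min_speed_ghz"]
            and spec["functions"] <= est["functions"]   # required subset
            and est["pins"] == spec["pins"])

estimate = {"power_watts": 0.8, "speed_ghz": 2.2,
            "functions": {"tcp_ip_decode"}, "pins": 256}
print(meets_spec(spec, estimate))  # True
```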

Developing Specifications through Languages
A chip design specification is a description of an artifact (i.e., a chip).
Therefore, a language is needed to develop such a description. There are two
ways to create a specification:

- Informal specifications. These are documents written in a natural
language, such as English. There are two main types of informal
specifications: customer specifications and internal specifications.

- Formal specifications. These are computer-based descriptions in
hardware-specific languages (e.g., SystemC, System Level Description
Language [SLDL], and Unified Modeling Language [UML]), generic
programming languages (e.g., Java, C++, C, and Matlab), or hardware
schematics in a standard format. (Languages mentioned in this book,
e.g., Matlab, may be owned by specific companies and, as such, are
used only as references, for learning and completeness purposes.)
Formal specifications tend to resemble not so much a specification as an
actual design at a high level and can sometimes be directly simulated.

Question. Why should design teams use formal languages, if English and
other natural languages are the most developed ways for humans to communicate?
Conversely, could we design a chip on the basis of only natural languages, such
as English?
Table 2-1 depicts a possible chip specification for a high-performance
interface communications circuit. Many exact copies of these circuits sit in a


Table 2-1. A portion of an informal design specification in English

typical single modern networking chip. It is a very difficult type of circuit to design,
because it includes both analog and digital circuits inside, because it needs to run
at extremely high speeds (to enable increasingly high bandwidths), and because it
cannot consume a lot of power (owing to limitations in the overall chip).
As can be seen in the portion of the design specification displayed in table
2-1, the specification conveys a set of critical requirements that need to be met:

- Performance requirements, in the form of the aggregate bandwidth that
needs to be provided (40 gigabits per second with four ports). Other
performance requirements include the following: the bandwidth for
each pin can range between 500 megabits per second and 10 gigabits per
second; power consumption is guaranteed to be lower than or equal to
500 milliwatts; and jitter will not exceed two picoseconds RMS.

- Interface requirements, in the form of integration of up to 100 units for up to
eight terabits per second on a single chip. Other interface requirements
include the following: high-performance current-mode interface; logic
input/output with configurable output drive transmit emphasis settings; and
the ability to connect to backplanes, cables, and optical channels.

- Functional requirements, in the form of compliance with 25 different
communications standards and adaptability to custom media with
low overhead, through five separate pre-emphasis and equalization
capabilities, together with an option for automatic adaptive equalization.


Fig. 2-1. A portion of a formal design specification in SLDL

Figure 2-1 depicts a portion of a formal design specification in SLDL. As
can be seen in figure 2-1, the chip or circuit under design cannot have a power
consumption over 100 milliwatts. This is expressed as a power constraint. In
addition, there is another constraint that is of a timing nature: a particular signal
event, inEdge, at an input pin triggers an output event, o, 10 nanoseconds later.
Although, for simplicity, the description in figure 2-1 is not complete, the bottom
line is that the output needs to appear no later than 10 nanoseconds after the input
event happens. Thus, both constraints are essentially performance requirements.
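The two constraints can be paraphrased as executable checks, as in the following sketch (ordinary software, not actual SLDL syntax; the sample numbers are invented):

```python
# The two constraints restated as executable checks:
#   1. power consumption must not exceed 100 milliwatts
#   2. the output event must occur no later than 10 ns after the input edge

MAX_POWER_MW = 100.0
MAX_DELAY_NS = 10.0

def check_constraints(power_mw, in_edge_ns, out_event_ns):
    """Return (power_ok, timing_ok) for one simulated result."""
    power_ok = power_mw <= MAX_POWER_MW
    timing_ok = (out_event_ns - in_edge_ns) <= MAX_DELAY_NS
    return power_ok, timing_ok

# A hypothetical simulation result: 85 mW, output 8 ns after the input edge
print(check_constraints(85.0, 100.0, 108.0))  # (True, True)
```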
Question. Why don't hardware and chip designers use the same languages
as software developers to create a chip specification? What is the key difference
between hardware (chips) and software (code that runs on processors)? Does it
make sense to start from an English-language description in both cases?
Exercise. Describe a specification for a chip that does two sums (of four
variables) in parallel, in a software development language that you know, such as
C, C++, Java, or Perl. Explain the issues that arise during development and why
a chip design language, as described in this section, might help.

How Are Specifications Developed?

The chicken-and-egg question that remains to be answered is, How exactly
do we come up with the requirements that make up a design specification? This
is a question that matters to the success of any chip project; without an accurate
and clear specification, the success of a project is in jeopardy.


The development of the requirements in a specification, called requirements
analysis, requirements engineering, requirements gathering, or requirements
capture, is sometimes complicated. The goal of this process is to faithfully
represent the potentially inconsistent features requested by all stakeholders.
The main stakeholders are the customers of the chips, as well as partners and
suppliers of products with which the chip needs to work flawlessly. Customers
may be multiple teams that have a say in the requirements set, as opposed to a
single person.
There are several important aspects to successful requirements capture.
The capture process and the requirements themselves need to be:

- Accurate. Capturing the wrong requirements can be disastrous for
any project!

- Exhaustive. Incomplete requirements, or requirements lacking sufficient
detail, can provide a false sense of accomplishment and can doom a
project even when the chip is already in use in the field.

- Actionable. Requirements must be measurable and testable (otherwise,
how can we test against requirements?). They must also relate to actual
business requirements, no matter how technical.

- Systematic. A systematic process is much more likely to result in high-
quality requirements.
As figure 2-2 depicts, the process of developing a specification starts with
the stakeholders: customers, partners, and suppliers. First, by establishing
communication with the stakeholders, an initial set of requirements is derived.

Fig. 2-2. Simplified graphical depiction of the requirements development process


Second, those requirements go through a refinement process; that is, internal
and external dialog continues to ensure that the requirements are unambiguous,
clear, consistent with each other, representative of a real business and technical
need, accurate, and actionable. This second step may need to be repeated several
times with multiple stakeholders; hence, the loop in figure 2-2. Third, once a final
set of requirements is listed, these requirements are developed into a specification
by using formal and informal languages, as described in this chapter.

This chapter has introduced the concept of the specification, a critical and
historically underserved part of a modern chip's design. Informal specifications
(written in natural languages) and formal specifications (which can be processed
by software) have been covered, both of which are necessary.


3 System-Level Design
This chapter details the first technical step in designing a modern chip
after coming up with its specification; namely, system-level design.

System-Level Design: A Growing Discipline

Once a specification for a chip has been completed, the design is started in
earnest by the design team. Assuming a top-down methodology, the first step is to
generate the highest possible level of description of the design, implementing the
specification yet providing a broad perspective of the chip. As a result, the team
can estimate whether the specifications seem feasible and can delineate what
blocks need to be assembled and what functions they will perform. Because today's
chips are, in effect, complex systems, possibly including processors, memory,
high-speed communications, and even long embedded software programs, it is
no surprise that this first design stage is called system-level design.
Definition. System-level design, or system design, is the generation of a
high-level functional description of a design. This description should include both
the system and its environment.
A modern system, whether on a single chip or on multiple chips, may include
large blocks, such as a processor (microprocessor); various types of memory
blocks; and even storage blocks, such as a hard drive (which is typically off chip).
In addition, a system should include critical environmental elements, such as the
package where the chip is embedded, the communications channel it needs to


be connected to and work with, and the coding and/or patterns to which the
communicated data need to conform.
A system-level description should include the following three types of information:

- Function. The functional description of the system describes its high-level
behavior. This may be accomplished in various manners; for
example, through a software program that emulates this behavior.

- Interface. The interface description of a system describes its high-level
inputs and outputs, including their name, function (see the preceding
bullet point), and data format, typically some sort of vector. For example,
a given input named clock represents the input clock signal, and it is a
periodic signal that goes from a fixed high voltage to zero voltage.

- Structure. The structural description of a system provides a hierarchy of
blocks and subblocks that compose the schematic of the design, with the
proper interconnections between them that make it work according to
the specification.
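The three description types can be sketched for a toy system in ordinary software (all names and blocks below are invented for illustration):

```python
# The three description types for a toy system-level block, sketched in
# ordinary software rather than a hardware language.

# Interface: named inputs and outputs with their data formats
interface = {"inputs":  {"clock": "periodic bit", "data_in": "8-bit vector"},
             "outputs": {"data_out": "8-bit vector"}}

# Structure: a hierarchy of blocks and the wires interconnecting them
structure = {"blocks": ["processor", "memory", "io_transceiver"],
             "wires":  [("processor", "memory"),
                        ("processor", "io_transceiver")]}

# Function: a behavioral model that emulates what the system does
def behavior(data_in):
    """High-level behavior: invert every byte of the input stream."""
    return [(~byte) & 0xFF for byte in data_in]

print(behavior([0x00, 0x0F, 0xFF]))  # [255, 240, 0]
```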

Question. System design is not always pursued, even in today's modern
chips. Why?
Pursuing effective system design is an increasingly critical task in today's
electronic industry. The enormous complexity of today's designs makes it
extremely important to evaluate feasibility and optimality as early as possible
in the design process. Otherwise, fatal errors will be found further down in the
design chain, and these errors will be orders of magnitude more costly to fix at
that point in time.
Unfortunately, system-level design is not pursued in earnest in a number of
designs, or it is pursued in a very nonstructured, quick manner. There are several
reasons for this. First, most of the EDA tools that are intended to be used at this
system level of abstraction are much more limited, less automated, and at an earlier
stage of development than the ones at lower levels of abstraction. While the EDA
discipline in general is several decades old, the system-level design discipline took
off with enough energy and funding only in the early 2000s, just as the design
community found that chips were essentially becoming complex systems, while
more and more complex functions were to be integrated in a single chip for cost,
power consumption, and performance reasons. The second reason for the lack
of true system-level design in some teams is the difficulty of adapting the team culture
to devote enough time to high-level design, given the time pressures on today's
product teams. The third reason is simple: in most cases, once the team comes
up with a system-level description of the design, it then needs to generate the

20 Chip Design for Non-Designers: An Introduction

_Carballo_Book.indb 20 2/20/08 4:53:56 PM

lower-level description (after system design) by handthat is, possibly using EDA
tools but having to manually enter the schematic of programmatic description
of the design in those tools. A push-button tool that automatically generates
the details on the basis of the system-level description has not yet reached the
mainstream in electronic design.

Design Sub-flow
The system-level design flow is a subset of the overall design flow and sits at the top of the design chain, as can be imagined. Figure 3-1 depicts a simplified description of a typical system-level design flow. The input to this flow is the design specification. The output is a register-transfer-level (RTL)/logic-level description of the design. (For simplicity, assume that this flow includes only digital design components. The case of analog or mixed-signal design flow at the system level will be treated separately; see chapter 5.)

Fig. 3-1. System-level design flow


Design Entry
The first step in system design is design entry. This is the process by which a designer enters a description of the system at a very high level that he or she believes will meet the overall specifications. This description could be a software program or a schematic that describes the major components (e.g., a processor, a piece of memory, a set of input/output transmitters and receivers) and how they are interconnected; that is, a structural description of the design. Alternatively, it could be a behavioral description of the design; that is, a software program or English-language description that provides the details of how the design works to meet the specifications. A behavioral description closely resembles a standard software program. Descriptions can also mix behavioral and structural styles. System-level design entry has thus become the task of entering the description of an entire system: blocks that used to be on the system board now increasingly reside inside the chip. Descriptions, whether behavioral or structural, are essentially models of the system or part of the system.
The majority of current chips are systems on a single piece of silicon; that is, systems on chip (SoCs). A typical SoC consists of various cores: one or more microprocessors or digital signal processors (DSPs); memory blocks, such as random-access memory (RAM), read-only memory (ROM), and electronically erasable programmable read-only memory (EEPROM); timing or clock sources, such as oscillators; peripherals, such as timers and reset generators; external interfaces, including various industry standards; analog interfaces, including analog-to-digital converters and digital-to-analog converters; voltage regulators and power management circuits; and so forth.
Recall our initial example of a cell phone board (see chap. 1). The top of figure 3-2 shows how the media-processing component is in effect a complex SoC that needs to process multiple types of media (audio and video in various formats, protocols, and speeds) very efficiently. Sometimes it is not possible to build an SoC for a specific application. An alternative is a system in package (SiP), which is a system composed of a number of chips fitting in a single package. A SiP tends to be more cost-effective in such cases, because the yield (the number of chips that come off the manufacturing line working correctly) is higher: several smaller chips are easier to manufacture than one large chip (e.g., if a dust spot falls randomly on a surface, the bigger that surface is, the more likely the spot falls on the functional part of the chip), and their packages will also be easier to manufacture.
The blocks in figure 3-2 are connected by either a proprietary or an industry-standard bus, such as the AMBA bus from ARM. Direct memory


access (DMA) controllers may be used to route data directly between memory
and external interfaces, bypassing the processor core and thereby increasing the
data throughput of the chip.
Figure 3-2 zooms in on the media-processing block to depict it as an SoC.
This block is in charge of performing complex yet increasingly popular tasks,
such as audio and video encoding and decoding, in various formats and at various
speeds. This SoC includes one processor core (increasingly, there is more than one),
memory, and custom logic blocks, which are necessary since not all the complex
media coding and decoding can be done as software running on the processor.

Fig. 3-2. Design of an SoC through the use of various system-level models


The design of this SoC necessitates the creation or importation of models for each of these blocks in one or more languages. As such, the example SoC in figure 3-2 incorporates models of the processor itself, the software that runs on the processor, the memory block, the custom logic functionality, the buses (a bus is the most popular type of interconnection in current SoCs; it consists of a communications channel shared by a set of blocks connected to it, which have to follow a certain access protocol), and the input/output connections.
Block models in system-level design may be developed by hand or imported from internal or external libraries. For example, the hardware model for a processor is usually provided by the supplier of the processor intellectual property (IP), that is, the processor core.
One or more languages can be used to develop system-level models during
design entry. Because a system-level description is so high level, it tends to look
more like a software program. Thus, various languages are used, represented by
the following types:

General programming languages. Java and the C++ and C languages (proprietary languages mentioned here only for reference) are examples of software development languages that, while used by millions of programmers around the world for various purposes, are also sometimes used to describe hardware, specifically, to provide system-level chip descriptions. For example, for the software running on the processor in figure 3-2, general programming languages can be used; however, special software libraries may need to be used and linked to by that software, so that it works properly with the processor and the bus attached to it.

Special-purpose design languages. There are also languages that are used only for specific purposes, although not exclusively to design hardware or chips. Mathematical languages, such as those in the Matlab or Mathematica commercial programming environments, are becoming ever more popular as signal processing is increasingly integrated into chips for many applications (e.g., consumer electronics and automotive applications), because of the importance of audio and video signals in those applications. While these languages can be used for a number of purposes, they are becoming increasingly popular for hardware design.

Hardware/system design languages. There are languages that are used exclusively to design hardware or chips, sometimes for broader applications. Well-known languages of this kind are SystemC, SpecC, and SystemVerilog.

24 Chip Design for Non-Designers: An Introduction

_Carballo_Book.indb 24 2/20/08 4:53:57 PM

Graphical languages. As a separate category, there are languages that are embedded in development environments yet appear as graphical interactions. While we call them "languages," the user draws, drags, and drops graphical entities, such as boxes and wires, to accomplish the design. These languages can be part of the same development environment as the nongraphical languages. An example is Simulink, a graphical development environment within Matlab.

An interesting characteristic of these languages is that, even though they are high level, they can sometimes model the impact of the manufacturing process and technology. This is possible when they are fed good input data from the manufacturing and low-level design sides.
Example: SpecC. Figure 3-3 depicts an example portion of a system-level design description in the SpecC language. In this example, we describe a function built by combining an add hardware block, f2, and a square hardware block, f1, and wiring them to each other and to inputs in1 and in2 and output out3, as shown in figure 3-3. At this level of abstraction, the language provides the constructs F1 and F2 to simplify design and abstract details from the designers. Further, f1 represents an instance, that is, an actual block, of type F1, that is used to calculate the square of a given input. Similarly, f2 is an instance, that is, an actual block, of type F2, that is used to calculate the sum of its two inputs. Like most languages, SpecC is based on types of blocks and signals that process or transmit information, and instances of those types are created for each particular design that a team is facing.

Fig. 3-3. A block of code written in SpecC


Example: SystemC. Figure 3-4 depicts an example portion of a system-level design description in the popular SystemC language. In this example, a more complex function is described using more complex blocks that are predefined elsewhere or that will be defined later in the design flow. Because of its complexity, only a portion of the core is described, even at this level of abstraction. A design instance is being used, and its basic template, called SmallCore, is described in figure 3-4. An instance of this type will be a core inside a chip. A core is a large block inside a chip that is pulled from a given library, internal or external to the company or institution designing the chip.

Fig. 3-4. Portion of a system-level code description in SystemC

The code is organized in different sections with different purposes. The top rectangle in figure 3-4 represents inputs and outputs to this core, including an input wire, clockInput, that carries the clock signal necessary to make all core functions work in sync (more on the clock signal in chapters 4 and 5); another input, called dataInput, a set of 64 unsigned integer (sc_uint) wire signals, which together form a 64-bit input to the core (a bit is a variable that can take a 1 or 0 value; in the electronics world, all information ends up encoded as a set of bits; more on this as well in chapter 4); and an 8-bit integer (sc_int) output, called controlOutput.

The bottom rectangle in figure 3-4 represents the actual design, where internal signals and blocks, including the aforementioned core, are used: sigData carries 32 bits internally; a clock signal of type sc_clock, called CLOCK, with a period of twice the standard period (here, assume that the standard clock runs at two gigahertz), represents the chip clock; and an instance of type SmallCore, called SPC, is used. The actual detailed design is called lowPowerProcessor, and its internal details are not provided here. (They are predefined at a high level in another code


file, and the details will be defined later in the design flow or borrowed from a third party.) The clock input for this core (i.e., its clockInput) is the CLOCK signal just mentioned. The data input for this core is connected to the sigData 32-bit signal just defined (i.e., only 32 of the 64 bits of input to this core are assigned at this point).

The last line in this example is a simulation statement to indicate that the clock starts ticking at 50 time units. (A time unit is typically one nanosecond [10^-9 seconds] or one picosecond [10^-12 seconds] for more modern, faster designs.)

What about the software?

We have mostly focused on the development of the hardware in our design; in other words, everything but the actual software that runs in the one or more processors typically embedded in the chip. Many modern chips under design include one or more microprocessors inside, each of which may have a different programming model and set of basic instructions. Software was traditionally built in-house. More recently, owing to its complexity, software may also be purchased from third parties other than the chip or hardware provider.
In most cases, software applications run on an operating system, which forms the layer of software that touches the hardware and abstracts its complexity from applications. An operating system is basically a software program that manages the resources of a system (which themselves can be hardware or software). The operating system is in charge of basic management functions like memory allocation, requests for access to various resources (e.g., an external hard drive), and access to input/output devices, networks, and files. It also provides an interface (graphical or not) to direct these functions. For example, a cell phone may include a chip with a processor that runs the Linux operating system. Many applications may run partially or completely on top of that operating system, for example, voice calls, Web browsing, and video recording. Open software-programming environments and operating systems (e.g., Linux) may include a broad selection of third-party software, thus addressing a larger market.
Question. How is software developed, and should it be developed in
conjunction with the hardware?
The ranks of SoC designers currently include software developers. These
designers are called embedded-software developers, since this is software that
needs to be designed as the chip is completed and will be provided as an integral
part of (i.e., embedded in) the product. Software development tools are provided
by specialized companies, by the companies providing the chip or system,
and by the community of developers themselves (in which case, the software
is often open for modifications; this is called open-source software). The tools


and languages used for embedded-software design are also used by conventional
software designers:

Development environments. These are graphical or textual interfaces that allow entry of the code and facilitate direct application of the following tools within the same environment.

Compilers. From programming-language source code, compilers create code that can be run directly on the hardware. Custom compilers and linkers may be used to improve optimization for the particular hardware.

Assemblers. From low-level assembly language (where each command represents a single instruction in the processor), assemblers create code that can be run directly on the hardware.

Debuggers. These simulate the behavior of the software, to find bugs. Some processors have an emulator device inside that helps to quickly load and debug the code. Other processors include a signature at the end of the software, so the chip can check if the program is valid. For DSP systems, a math simulator such as Mathematica or Matlab may be used. Debugging may be pursued at assembly level (low [detailed]) or source level (high [the level at which most programmers prefer to work]). Sometimes a personal computer may be used if the processor in the computer is similar or can emulate the target processor.

On-chip interconnections: Buses and networks

One nonapparent element in the preceding example is the need to model interconnections in SoCs. Figure 3-5 describes the two most popular types of intercore communications on chip: bus interconnections and network interconnections.


Fig. 3-5. Comparison of chips based on bus interconnections with networks on chip

Most modern chips connect large cores via an interconnection structure called a bus. Buses are the shared highways for on-chip (or off-chip, which is beyond the scope of this book) communications signals, and they have limited capacity. Because of this limited capacity and the target core's limited attention span, when a core (e.g., a microprocessor core) wants to talk to another core (e.g., a memory core, for writing to it or reading from it), it may need to apply for access to that bus beforehand. If the bus is not being accessed by another core to use that memory, then the bus can be used to make the connection and transfer the data.
Buses can be categorized into blocking and nonblocking. A blocking bus is used in complete bursts of data. Once the burst of data is transmitted, the bus is released and can be used by a different core or set of cores. Nonblocking buses are used on a cycle-by-cycle basis; that is, for every few clock ticks, some data may be transferred, but others may be able to access the bus in between. Here is a simplified example of blocking bus operation in SystemC:

    Status read_in_bursts(start_address A, data *D,
                          length = 6, lock = FALSE, priority = 1)
    { ... wait(transmission_complete); ... }

In this example, the operation performed is a blocking read operation from memory, starting at memory address A, then reading six blocks of data, with top priority. Until this operation is finished, we cannot directly access this bus to perform other operations. Status is the value returned and indicates when the operation is complete. The wait statement indicates this blocking characteristic.

A similar function can be written for a nonblocking read that returns immediately. Thus, it does not block access to the bus and lets the simulation continue for the designer.


Major chip or silicon core companies feature their own bus, which they propose as an industry standard. Specifically, companies that provide their own processor cores, either general or special purpose (e.g., DSPs), often also provide an open bus specification, standard, and model so that chips including these processors can easily connect them with off-the-shelf cores, such as image decoders, memory cores, and peripherals. Examples include the AMBA bus from ARM, which connects with ARM's processor core family; the CoreConnect bus from IBM, which connects with the PowerPC processor core family; and the MIPS bus, which connects to the MIPS processor core family (these names and their corresponding trademarks are owned by their respective company owners and are used here only for reference purposes).
Modern chips are starting to consider an emerging technology, networks on chip (NOC), that is, an interconnection network with many point-to-point direct connections, which eliminates some of the blocking concerns of buses. The information in this case is decomposed into packets, each of which is routed using that network, possibly going through several switches or intersections. Thus, these networks work very much like communications networks outside chips and systems, using packet-type protocols and mini-switches or routers to communicate (as the Internet does).
Question. NOC presents important advantages as technology advances and chip sizes increase. What is the downside of using emerging technologies like NOC, if any?

NOC may seem like a high-overhead solution to the interconnect problem in SoCs. In addition, like any other change, it implies a modification in the techniques used to design and possibly to utilize the resources in a chip. However, adoption of NOC is growing for several reasons. First, as silicon technology moves ahead, interconnects are starting to dominate performance and power consumption, because signal propagation through the chip wires requires longer times, sometimes multiple clock cycles. A network can reduce the complexity of designing these wires and can help predict and control their speed and power consumption. Networks are regular and modular and are thus predictable structures (lengths are fixed). From the perspective of system-level design, and as the number of cores in a chip increases (even counting processor cores only), it becomes rational to utilize NOC. Reusing silicon cores, and synchronizing them with each other, becomes an easier task. The productivity of design tasks should improve as networks become more mainstream.


What about the chip package?
The chip package, also called the carrier, is the most important encapsulation in which a chip is embedded. Packaging provides protection, control, and connection with the outside world, in particular, with the board in which it is embedded. For example, an application processor is plugged into the main electronic board in a cell phone. To physically insert a chip into its package and to physically plug the packaged chip into its board is a matter of extreme precision (today's chip packages may have thousands of pins) and thus is done in an automated fashion. There are many types of chip packages, primarily based on the architecture of the package and the materials used, including the following:

The conventional dual in-line package (DIP), which features pins on the
left and right sides of the chip

Ceramic pin grid array (CPGA), which features pins on a grid and uses
ceramic materials

Organic pin grid array (OPGA), which uses organic materials

Flip-chip pin grid array (FC-PGA), which enables a chip to be flipped onto another (like a sandwich), thereby enhancing density

Because of the complexity of today's packages and their impact on cost, performance, and energy dissipation, there is an increasing need for chip/package codesign; in other words, packaging decisions need to be made as soon as possible in the design process. Thus, today's methodologies increasingly account for packaging aspects as early in the process as system-level design. Figure 3-6 depicts details of an example chip/package codesign process.
As the left side of figure 3-6 shows, the chip, in this case the unpackaged chip or silicon itself, is embedded in a package that has a multitude of pins, each of which has to be connected to a pad in the silicon chip, for input or output purposes. The packaged chip is then embedded in an electronic board (or whichever type of form factor the system takes). Thus, chip/package codesign needs to account for three concepts: the unpackaged chip, the package, and the system board.


Fig. 3-6. A chip/package codesign process

As this example demonstrates, the designer uses a set of system-level tools to perform a series of analyses and design choices based on these analyses. Note that loops (at the top of fig. 3-6) exist between each of these design tasks, to converge to the appropriate chip/package combination, and the order of these tasks may change depending on the design methodology at hand.

First, an extraction is performed, to extract the equivalent circuit, that is, the electrical circuit equivalent to the combination of chip, package, and board. When analyzed, the equivalent circuit can provide tremendous insight into performance, power, and so forth. This circuit may include fundamental electrical components, such as

Resistors. Resistors are electrical devices that conduct current linearly; that is, the voltage across the device is always a constant multiple of the current through the device, and that constant is the resistance. In chip/package codesign, resistors represent wires inside the chip, such as those that transport the power supply voltage from outside to feed the chip's circuits through the chip's pads. To explain why resistors are important, consider the fundamental equation for a resistor:

V = IR (3.2)

where I is the electrical current, R is the value of the resistor, and V is the voltage. As equation (3.2) shows, voltage is proportional to current, and vice versa. In other words, for a given electrical current, as a voltage is carried through a resistive wire/pad/network of value R, it drops in proportion to the value of that resistor.


Inductors. Inductors are magnetoelectric devices whose voltage is proportional to the change in current over time. In chip/package codesign, inductors (sometimes combined with other devices, e.g., resistors) can be utilized to model package pins and wires, external board wires, and long and thin wires (which increase inductance). To explain why inductors are important, the fundamental equation for an inductor follows:

V = L(dI/dt) (3.3)

where I is the electrical current, L is the value of the inductor, and V is the voltage. As equation (3.3) shows, voltage across the inductor is proportional to inductance and to the time derivative of current. In other words, quick changes in electrical current, which are common in today's chips because they must run fast, may cause quite large changes in voltage, especially if the inductor is large. Hence, inductive package pins and board wires may negatively affect the stability of the voltages supplied all over our system.

Capacitors. Capacitors are electrical devices whose current is proportional to the change in voltage over time. Capacitors store charge between their plates, and that charge sets a voltage across the device (charge = capacitance x voltage). To change the voltage, the amount of charge needs to be increased or decreased, and either process takes time. In chip/package codesign, capacitors (sometimes combined with other devices, e.g., resistors) can be utilized to model internal chip wires and pads, used for voltage supplies or data signals. To explain why capacitors are important, the fundamental equation for a capacitor follows:

I = C(dV/dt) (3.4)

where I is the electrical current, C is the capacitance, and V is the voltage. As equation (3.4) shows, current is proportional to capacitance and to the time derivative of voltage. In other words, for a given electrical current, if a capacitor is large, the voltage cannot change very fast with time. Hence, large capacitors help maintain stability in voltage supplies.

Next, a package is tentatively chosen to satisfy the design requirements. (Alternatively, this choice could be made before the first step, i.e., before extraction.) For example, a DIP may be less expensive but may not have enough pins and/or may not tolerate such high signal frequencies. Note that if the package


choice is changed, designers need to re-create the equivalent circuits used to
model the chip/package combination.
Power distribution is analyzed next. A map of the levels of power supply voltage across chip, package, and board is obtained, to ascertain whether there are hot spots or noisy spots that can affect reliability, integrity, and speed and lead to excess power dissipation. A typical map is a colored graphical chart describing the level and variability of voltage across the interconnections on-chip (most important) and off-chip. Red or a darker color means higher levels, while blue or a lighter color means lower levels.
Speed and frequency are also analyzed with all the data at hand, to evaluate the actual performance the system board will experience, which is critical to satisfying ultimate customer requirements. A typical result is a real-time simulation of the signals across package and board interconnects, to evaluate signal integrity, that is, how the shape of these signals gets distorted across the wires, and the maximum performance attainable with correct functionality. For example, as the signal moves through the given package and across a long wire in the board, it gets distorted and may require that the maximum signal frequency be no more than one gigahertz (one billion cycles per second).
Finally, the power supply distribution network, and, potentially, other clock and data signals, is analyzed to determine how much capacitance is needed at each point in the network. Areas with lots of variation in the signal tend to need more capacitance (more details on capacitance as a key circuit characteristic will be given later), but large capacitors occupy more space and consume more power. For example, in the areas of the chip closer to the package, the voltage supply size and quality may be lower; thus, large capacitors may be needed.

A custom model using general-purpose languages

An alternative to using languages specifically intended for hardware and chip development is to create a custom model for the type of chip or circuit under design by using general-purpose or mathematical languages. Here, an example is presented of a design flow used to develop high-speed communications link circuits. However, let us look first at the chip under design. This example includes various aspects of system design, including analog design, package design, and digital design.

A communications link circuit sits just before the inputs and outputs to a chip and delivers digital data to or from the outside world (a wire or a board) into the chip at very high speeds. A graphical depiction of these circuits and how they fit within an overall hardware communications system is provided in figure 3-7.


Fig. 3-7. How a link core/circuit fits within a communications system

As figure 3-7 indicates, there are transmitter link circuits and receiver link circuits. Transmitter circuits, implemented as cores within the chip that transmits the data, send information over the channel, that is, the medium where communication happens in the hardware system. A channel can be a copper wire, a fiber-optic wire, or a computer board, for example. Because channels, chip packages, and connectors are not perfect transmission media, the signal transmitted becomes distorted in the frequency and time domains as it moves out of the chip through the package, across the channel, and then into the receiver chip through its own package. As a result, the signal that the receiver obtains is quite different from the one sent, as can be seen at the top of figure 3-7.

This distortion effect, combined with the very high speeds at which the signal travels, presents a difficult design task. Note that the receiver circuit needs to perform complex signal recovery functions: first using analog operations to receive the signal (the world is always analog before it enters a digital chip), then converting it to digital signals, and then performing complex DSP operations to accurately recover the information initially sent on the other side. Designing these transmitter and receiver circuits or cores requires accurate modeling of the entire system that produces these distortion effects, and a similarly accurate approach to model the speed, power consumption, area, and overall error performance of the system. Enter the custom modeling system depicted in figure 3-8.


Fig. 3-8. A system-level design flow for a serial communications link chip

The system designer (at left in fig. 3-8) operates a design environment to perform estimations of the following:

What kind of BER (bit error rate) performance is possible (i.e., how many times per second we would receive a signal and misinterpret it, e.g., decode it as 0 when it is 1). The BER can be modeled in the design environment as a probabilistic function of three quantities: J, a variable called jitter, an expression of the uncertainty of the signal around its expected timing (like a vibration around the vertical signal lines at the top left of fig. 3-7); X, the signal value itself; and V, the value level that determines whether we see a 1 or a 0. In other words, the BER is the result of a probabilistic or statistical calculation around the mean value of the timing of the signal. The actual jitter depends on various variability conditions associated with the noise in the chip devices, the variation in the supply voltage, and even the capability of the circuit design to deal with high noise levels.

What amount of power consumption is needed to achieve this
performance. In a first approximation, power consumption can
be expressed as the sum of the power consumption estimated for
each block in the chip, plus a factor that accounts for the power
consumption of the interconnections between the blocks (this factor
may be small for analog circuits, since they tend to have small numbers
of large blocks with few interconnections, but may be quite large for
digital chips).

How much chip area it will take to achieve such performance and power
consumption. Similar to power consumption, in a first approximation, the
total area can be computed as the sum of the areas of all blocks,
plus the estimated area for the interconnections between these blocks.
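These two first-approximation formulas amount to a sum over blocks plus an interconnect overhead. A minimal sketch in C (the structure and function names are invented for illustration):

```c
#include <stddef.h>

/* Per-block estimates, as they might come out of estimator functions. */
struct block_est {
    const char *name;
    double power_mw;   /* estimated block power, milliwatts */
    double area_mm2;   /* estimated block area, square millimeters */
};

/* Total power = sum over blocks, scaled up by an interconnect factor
 * (small for analog chips with few large blocks, larger for digital). */
double total_power_mw(const struct block_est *b, size_t n, double wiring_factor) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += b[i].power_mw;
    return sum * (1.0 + wiring_factor);
}

/* Total area follows the same first-approximation shape. */
double total_area_mm2(const struct block_est *b, size_t n, double wiring_factor) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += b[i].area_mm2;
    return sum * (1.0 + wiring_factor);
}
```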

Let us focus on a receiver circuit in this example. Performing these
estimations requires that a link configuration, that is, a set of variables that define
the communications link receiver and transmitter at a high level, be entered.
For example, how complex is the equalization needed on the receiver to fix the
signal that comes from the wire and package into the chip? It could be three-bit,
four-bit, or eight-bit equalization. The parameters described here are part of a
parametric chip model. This is a model of the link that simulates all of its signal-
processing characteristics, calibrated with real-world data; the parametric chip
model is simple to use, as it exposes only a few parameters (like knobs).
A generic system simulator, that is, simulation software that can interpret
the parametric chip model, can then run the model, using the parameters,
to obtain performance. The simulator could be a commercial off-the-shelf
mathematical program, such as Matlab, if the model is written in a corresponding
language. Alternatively, it could be the basic operating system in any computer, if
the model is compiled from a general-purpose language like C into an executable
program. Estimator functions can be used to estimate power consumption and
area for the circuit.
The system designer can use the design environment in two ways. First, the
designer enters parameters for the link, runs a simulation and estimation, and,
based on the results (power, area, and performance), keeps modifying the
parameters until satisfied with the results. Second, the designer can ask the
system to perform parameter modifications intelligently and automatically to
satisfy optimization parameters, that is, a set of parameters that guide the search
for a solution: for example, no more than one square millimeter of area, or no
more than 0.250 watts of power consumption per one-bit receiver link.
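The second, automated mode can be sketched as a search over candidate link configurations that discards any configuration violating the constraints and keeps the best-performing survivor. This is an illustrative toy (real optimizers are far more sophisticated, and the configuration fields here are assumptions):

```c
#include <stddef.h>

/* A candidate link configuration plus the estimated results that a
 * simulation/estimation pass would attach to it. */
struct link_config {
    int    eq_taps;     /* e.g., 3-, 4-, or 8-tap equalization */
    double ber;         /* estimated bit error rate */
    double area_mm2;    /* estimated area */
    double power_w;     /* estimated power per one-bit receiver link */
};

/* Return the index of the lowest-BER configuration that satisfies the
 * area and power constraints, or -1 if none qualifies. */
int pick_config(const struct link_config *c, size_t n,
                double max_area_mm2, double max_power_w) {
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (c[i].area_mm2 > max_area_mm2 || c[i].power_w > max_power_w)
            continue;  /* violates a constraint: discard */
        if (best < 0 || c[i].ber < c[best].ber)
            best = (int)i;
    }
    return best;
}
```

With the constraints from the text (at most 1 square millimeter and 0.250 watts), a configuration with the best raw BER can still lose to a smaller, lower-power one that actually fits.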
For the design environment to work, however, somebody needs to enter
the parameters that describe the characteristics of the channel (and the chip
packages), so that signal distortions can be simulated, as well as the data patterns
exiting one chip and entering the other (e.g., 000101010). Design environments
like this can be created using various nonhardware languages: general-purpose
languages (e.g., C), mathematical languages, or even custom-made languages
(e.g., an extension of C++ that includes signal-processing functions and
vector variables).


System-Level Design to Logic-Level Design: High-Level Synthesis
Once a system-level description of the design has been generated and
validated or simulated to the team's satisfaction, it is time to generate and evaluate
a lower-level description of the design. The next level is the RTL/logic level. The
process by which we generate an RTL/logic-level description of a design from a
system-level (higher-level) description is named high-level synthesis.
By far the most common approach to high-level synthesis is to perform
it manually, because of the relative lack of fully automated EDA tools in
the market. In other words, in most cases, a design team that has written a
system-level description in one language has to write another description, from
scratch, at the lower RTL/logic level, usually in a different language. The
alternative, automated approach is a very hot area of EDA innovation, even
though many of the available offerings are still in research or at the
prototype/beta phase.
Language is also an important component of the translation. In addition to
moving to a lower level, with small blocks like logic gates, adders, and registers,
the design team typically also needs to move to a completely different language.
The main reason for this change is the ability to use languages specifically
intended for chip and hardware design, such as VHDL and Verilog. As a result,
there are various combinations of language changes as we move down from one
level to the one below. For example:

C code to VHDL code

Matlab code to logic code, in Verilog or VHDL, for field-programmable
gate array (FPGA) devices

SystemC code to Verilog or VHDL code

Java code to Verilog code

The next chapter describes the logic level of design in detail.


Chip Planning
A parallel aspect of designing a chip at a high/system level is the chip
planning or prototyping process. While it is important, as already demonstrated,
to generate a diagram of high-level blocks and their individual functions, their
overall interconnections, and how they work together to implement the overall
functionality, a team cannot move ahead without completing an additional task.
It is critical to evaluate whether this initial draft can provide not only functionality
but also the rest of its important quantifiable requirements, such as area,
performance, and power consumption.
Early chip planning facilitates an efficient overall design process, since it is
vastly more effective to find mistakes and over- or underdesigns early in
the calendar cycle. Therefore, chip planning focuses on estimating these design
characteristics at this early stage. While these estimations cannot be fully accurate,
their roughness is compensated for by the enormous advantages that they bring.
The most common approach used to generate chip estimations at this stage
is fast synthesis and physical implementation of the entire chip. In other words,
planning is aimed at re-creating the entire design process that will come downstream,
but in a much faster, automated, and approximate manner (for simplicity, we assume
only digital blocks here). This synthesis process typically consists of

Floorplanning. This is the arrangement of blocks across the space on
the chip, to optimize overall area and performance. For example, blocks
that are critical to performance, need to work together often, and have
many wires between them should be arranged very close to each other.

Clock synthesis and power planning. Clocking is the lifeblood of a chip,
and its blocks may need many different frequencies and types of clock
signals. Sizing and routing the wires that carry these signals is a whole
design task by itself. For example, blocks that share the same clock signal
should connect to the clock source via wires that are roughly the same
length, when measured from the block to the clock source.

Logic synthesis (including partitioning). This is the generation of the
myriad atomic logic blocks in the chip, such as ANDs and ORs,
needed to implement the very large amount of functionality in each
block. This synthesis needs to be performed later in a detailed fashion.
Here, a very rough and approximate job is done, mostly focused on the
number of gates needed and not so much on whether the exact functions
are represented.


Placement and routing. This is the detailed placement and
interconnection of the small blocks just generated, as well as the
connections between those blocks. Again, accurate implementation of
the functions (e.g., does this wire go precisely to the right pin in this
block?) is not needed.

We will return to this topic later (see chap. 5), when the detailed version of these
layouts is generated, with special emphasis on the generation of the clock and
voltage supply network.
To summarize, chip planning allows a design team to establish the feasibility of
the chip's quantitative requirements by generating a rough sketch of the design,
emulating the back-of-the-envelope approach that leads to a set of numbers, plus
a prototype that provides background for these numbers and an idea of what
the design looks like. If these numbers are completely out of any reasonable
boundaries (e.g., area is 10 times the maximum established requirement), then a
redesign or replanning is necessary at this stage, when the cost of fixing mistakes
is manageable. The same situation later on can cost many times more in schedule,
cost, and missed revenue.
A graphical description of a basic chip design estimation process is provided
in figure 3–9. As depicted in figure 3–9, there are several key inputs to this
estimation process: a high-level block structure, technology parameters, and
optimization criteria.

Fig. 3–9. A graphical depiction and example of the chip design prototyping process

First, a high-level block description of the design (system level) is provided,
primarily composed of the main blocks forming the structure of the chip,
plus a description of their interconnections and their inputs and outputs. This
description typically comes directly from the system-level design process covered
in this chapter and is a portion of the output of this process. The most important


aspects are as follows: which blocks are needed; the characteristics of each block,
especially size and functionality inside; the interconnections between blocks; and
the inputs and outputs of the overall design.
Technology parameters are necessary to characterize the technology in
which we expect to fabricate this chip. Without these parameters, it is not
possible to understand how large, fast, or power hungry the chip will be,
since these three aspects will depend significantly on the technology at hand.
Technology parameters include aspects such as minimum wire width, the
approximate delay introduced by a wire of unit length, the approximate size
of each type of logic gate (recall that a gate represents the minimum built-in
functional unit in modern digital design, e.g., NOR, NAND, and inverters), and
the power consumed by each of these types of gates. The parameters may also
directly refer to larger blocks from a built-in or third-party library: for example,
how much area a one-megabyte memory (1,000,000 bytes, or 8,000,000 bits)
will take up in this technology.
Optimization criteria are the objectives and constraints that are important
to the design team and should generally be consistent with the initial design
requirements. Objectives are expressions of preference by the design team on one
or more parameters: for example, maximize performance or minimize power
consumption. Constraints are limitations on one or more parameters: for example,
no more than 5 square millimeters of area, or it must fit within a one-millimeter-
by-one-millimeter square to be consistent with the package pin arrangement.
In the example shown in figure 3–9, the design team is targeting a
complementary metal oxide semiconductor (CMOS) 90-nanometer (or CMOS
0.09-micrometer [µm]) technology from a given supplier, which determines the
key atomic parameters needed to perform the estimations. The main optimization
criteria described in the figure are to minimize area and to target a speed of at least
1.6 gigahertz. This example refers to a communications circuit core that needs to
satisfy communications standards up to a rate of 3.2 gigabits of data per second,
which implies a minimum (clock) speed for the core circuits of 1.6 gigahertz, or
half that data rate. (The doubling of the data rate relative to the clock speed is
achieved by having twice the number of circuits working simultaneously in
parallel at the receiver over the same data flow.)
Figure 3–9 also shows that the other key input is a block diagram of the
design (details of it have not been provided, to simplify the figure). A planning tool
or set of tools is applied to these inputs. The result includes a prototype, that is,
a rough layout of the entire design that resembles what should come out months
later after detailed design. It also includes a set of output characteristics for the
prototype, including its estimated area (four square millimeters), average power
consumption (210 milliwatts), and maximum internal speed (1.6 gigahertz).


Please note that in some cases, prototyping and estimation cannot happen
at a high system level of abstraction but need to wait until an RTL/logic-level
description exists. This limitation often arises because of the difficulty of producing
accurate estimations from very-high-level descriptions. In any case, estimations
are always more precise at lower levels of abstraction than at higher ones.

Fig. 3–10. Advanced chip-planning process, showing the origin of each input

Advanced planning of an SoC and the players involved

Figure 3–10 depicts an advanced planning process for an SoC, including more
details on the various types of outputs involved, plus the entities or organizations
that need to collaborate to provide each of the inputs. As figure 3–10 indicates,
beyond the requirements and architectural schematic details of the design, various
other inputs are needed that often require external collaboration:

Providers of circuit cores, or silicon IP, need to ensure that there is a
library of cores that can be leveraged to implement the basic architecture
schematic desired. For example, libraries of various types of memories,
input/output blocks, or even microprocessor configurations are often
necessary, as these are blocks that most design teams do not want to
design from scratch.

Foundry companies and/or manufacturing organizations need to provide
details of the manufacturing process models, so that actual estimations
of area, power, and performance, and, importantly, manufacturing
yield (i.e., the percentage of chips that are likely to come off the
manufacturing line correctly despite the imperfections in today's complex
manufacturing processes) can be provided.

Foundries, manufacturing organizations, and possibly design and industry
consortia need to provide economic models that allow estimation of
the complex cost aspects involved in producing the chip. These models
may range from basic dollars-per-square-centimeter metrics to packaging,
testing, and other subsequent costs needed for a realistic estimate.

The outputs provided can include the layout or floorplan, power consumption,
yield/manufacturability, cost per chip, possible chip packages, chip area,
maximum frequency, maximum performance, and a logic/functional description
resulting from a rough synthesis step, as described previously in this chapter.

Emulation as a means to accelerate high-level design while keeping accuracy
An alternative or parallel process is called emulation: the process by
which the chip hardware is mapped to a platform based on actual hardware,
such as an FPGA, that emulates the behavior of the SoC. This allows the
software to be loaded directly into the memory of the emulation platform;
once the platform is programmed, it allows, for example, verification, testing,
and debugging of the hardware and the software at the same time at nearly
real-time speeds, thereby providing a realistic, fast path to identify
any functional issues.
Question. The default approach to the verification of functionality in a chip
is simulation, not emulation. In what cases is it advantageous to use pure
software simulation, if it is so much slower?
Simulation is clearly the easiest and lowest-cost path, even if it is not fast
enough for very large designs, especially when the software that runs on the
chip's processor is also simulated. (Each small software program takes many chip
clock cycles to run, thus possibly taking days and days to finish a simulation.)
Emulation can mitigate the speed issues involved in pure simulation by
mapping the design to special hardware that runs much faster, while any specific
inputs (the testbench) can keep running in the computer used for simulation. The
emulation hardware is connected to the computer via a high-speed connection,
to feed the proper data to the emulated design while ensuring that the connection
itself does not become a bottleneck.


Question. Why not create a prototype of the design on a board, so it can be
simulated faster?
A prototype of almost any digital chip can be built by wiring a set of FPGAs
on a board, thereby resulting in a fast and cost-effective solution. Building such a
board and making changes on it (possibly rewiring the board and changing the
FPGAs' configuration), however, may take a long time, is subject to errors, and
may not be easy to debug (because not all signals are accessible from outside).
Making prototypes is clearly very useful as the design nears its end and is in a
more stable state, after most bugs have been fixed. At this point, the focus is on
simulating the chip very rapidly to uncover remaining bugs and possibly to ensure
that the software runs well on the chip's processor.
Although emulation is typically focused on function and not so much on
power, speed, or area, it can be used to create such estimates as well. After
emulation, the chip can be designed following the aforementioned method.
Function is not to be discounted, though; functional bugs may account for
up to 70% of the time and people required to complete a chip and support it
during its life cycle.


4 RTL/Logic-Level Design
This chapter elaborates on the first technical step in designing a modern chip
that is automated in mainstream chip design: RTL/logic-level design.

RTL/Logic-Level Design: from Art to Science
Once system-level design has been executed, it is time for RTL/logic-level
design. In general, this design phase is much more automated than system-level
design and details the digital portion of a hardware design, including its function
and its structure. RTL/logic-level design is done on all major digital blocks in a
design and can be defined as follows:

RTL. RTL design is based on higher-level components, including
relatively large logic blocks. Examples of these blocks include adders,
multipliers, and registers (hence the name).

Logic (or gate) level. Logic design is based on smaller components,
called gates, including small logic blocks (e.g., Boolean operators)
and one-bit latch (register) blocks. The name logic refers to the use of
the fundamental Boolean operators AND, OR, and NOT (i.e., binary
operators: true or false, denoted by 1 or 0, respectively).

There are two reasons for describing these two levels in a single chapter.
First, both currently fall under a single, increasingly merged level of abstraction;
that is, a designer can write code to describe registers in the same file (e.g., where
it describes logic gates and clock buffers). Second, these are the first two levels
of abstraction at which there is a significant level of automation available with
today's EDA tools. Hereafter, logic design is used interchangeably to represent
RTL/logic-level design, for simplicity.

Fig. 4–1. The tasks that need to be pursued during RTL/logic-level design

The logic design flow is depicted in figure 4–1, including the most important
tasks that need to be pursued during this phase of design. As figure 4–1 indicates,
during logic design, engineers have to pursue the following tasks:

Design entry. Designers enter the detailed description of the block by
using a specific language, typically combining and interconnecting RTL
and logic-level blocks. Low-level behavioral descriptions are also possible.

Logic synthesis. Synthesis is the process by which a designer applies
an EDA tool to the RTL/logic-level description and generates a detailed
description whereby each gate and RTL block is fully mapped to the
available library of blocks that the manufacturing organization can produce.

Logic simulation and verification. For both the synthesized and the
raw RTL/logic-level description, it is necessary to perform a number
of checks that provide the engineering team with practical assurance
that the design can be manufactured as indicated in the specification.
Because of the detailed level of abstraction, there are a large number
of complex simulations and verifications that need to be performed.
Items that need to be verified include (but are not limited to) power
consumption, performance, functionality, testability, and correctness.


Physical synthesis/design. The detailed design is then ready to be
transformed into a layout, that is, the full graphical description
representing manufacturing layers. Once the layouts for all blocks in the
chip have been generated and interconnected, the chip will go through
final checks before moving on to manufacturing.

In the following sections, each of these steps and their subtasks are described in
more detail.

Design Entry
During design entry, designers enter the detailed description of a digital
block by using a specific language that allows them to mix and to interconnect
various types of RTL and logic-level blocks and, optionally, certain low-level
behavioral descriptions.
Question. Logic design is practically always based on writing code in
hardware programming languages, with VHDL and Verilog being by far the
most popular. Why is a coding language (and not a graphical schematic tool) the
correct and most popular approach?

Coding languages
Logic design is always conducted with the support of coding languages
specifically developed for hardware design. There are several good reasons to
use coding languages. The number of logic gates and blocks in a design, even
in a partial digital block, is typically extremely large, from 50,000 to millions of
gates. An effective way to enter (and, more important, to document) this mass
of objects is imperative.
Coding languages, well proven in software development environments, can
handle very large numbers of objects in an organized manner, enabling engineers
to quickly and effectively enter and understand designs in a team environment.
Hundreds, or even thousands, of engineers can work to develop complex designs
taking thousands, or possibly millions, of lines of code. For this reason, logic
design has evolved into an activity largely based on writing and debugging large
amounts of code in special-purpose languages, such as Verilog and VHDL. Note
that both languages can be used for logic blocks in both custom chips and FPGAs.


The two most popular languages have both recently been extended to describe
analog circuits as well; however, their digital-oriented versions are described
as follows:

VHDL. This language was originally created for the U.S. Department
of Defense to document the structure and behavior of the chips included
in the hardware bought by the department (to forgo large manuals).
Soon simulator software was developed that could parse the language,
and next logic synthesis tools were created. In accordance with the
original customer's requirement to leverage past experience, VHDL was
loosely based on the existing Ada software-programming language (not
a hardware language) and as such is a strongly typed language. Being
a hardware description language, VHDL can describe the parallelism
inherent in hardware and includes a fundamental set of Boolean
operators (NAND, NOR, NOT, etc.). The language is an IEEE standard
and allows designers to include numbers (integer and real), logical values
(1s and 0s), characters, arrays of bits and characters, and time units.
Behavior statements can be included, and modern simulators can simulate
the execution of all those statements in parallel, just as the hardware
would. If the described design blocks are composed of synthesizable
VHDL, logic synthesis tools can be applied to synthesize the design.

Verilog. By contrast, Verilog has a syntax similar to that of the C
software-programming language, to make it attractive to the engineering
community. Some of the control statements, such as while or if, are
similar to those of C, and the language is case sensitive. Owing to
its purpose, though, the language does have certain key differences
from C. The most important difference is that time is expressed
natively in Verilog, since the main goal is to describe digital hardware with
clocks managing the parallel execution of various modules, which are
assembled into a hierarchy. Other, less critical differences include the
absence of pointers in Verilog and the use of terms such as begin and
end to bound blocks of code (e.g., functions). Each block, or module,
includes inputs and outputs defined by ports. Sequential behavior
statements (bounded by begin and end) are placed across the design. All
of these statements can be executed in parallel. If the described design
blocks are composed of synthesizable Verilog, logic synthesis tools can
be applied to synthesize the design.


Graphical languages
Although logic design cannot practically be pursued without coding
languages, graphical approaches also exist, either as basic beginner kits or
as a complement to a coding-based approach. Most modern schematic-based
tools from major EDA vendors actually generate language code anyway.
Graphical tools are generally based on schematics, that is, computerized
engineering drawings composed of symbols representing logic components
and wires interconnecting those components. Schematics are useful to help
understand and convey the meaning of a circuit and can clarify the hierarchy
within a design (i.e., each symbol can contain a whole schematic inside, with
logic symbols and interconnections).

Fundamental Blocks in RTL/Logic-Level Design
Logic design at its core is based on combining fundamental blocks into
larger blocks. Essentially, computation encompasses two operations: first,
performing the operation per se (e.g., addition of two numbers); second, storing
the result somewhere before the next computation is performed. Following this
line of reasoning, there are two types of fundamental blocks in digital design:
combinational blocks and sequential blocks.

Combinational blocks
Combinational blocks perform logic operations. Complex logic operations
are formed by combining basic logic operations in various ways. Any systematic
operation on numbers, text/language, or anything else that can be expressed as a
sequence of 0s and 1s should be expressible as a logic operation. Several
fundamental logic operations, which we call operators or, more commonly,
gates, with logic inputs and logic outputs, are depicted in figure 4–2.


Fig. 4–2. Fundamental blocks in digital logic design

NOT/BUFFER. These are the simplest possible operators in logic design.
A NOT operator literally negates the value of its input, that is, returns a 1 if the
input is 0, and vice versa. A BUFFER operator is even simpler: it returns the
exact same logic value that appears at its input. Logic operators are represented
mathematically by specific symbols. For example, the NOT operator is typically
represented by a bar on top of the input variable (written here as ¬). Thus, the
equation describing a NOT operator is written as follows:

OUT = ¬IN

The equation describing a BUFFER operator is much simpler:

OUT = IN

Question. If the BUFFER operator does not change the logic value of its
input, why do designers use BUFFER operators at all?
The utility of a NOT operator is obvious. However, it is not so readily apparent
why designers would use an operator that does not change a logic value. The
answer is simple: because it provides a different kind of change; it changes the
quality of the signal. As will be explained in chapter 5, voltage signals deteriorate
as they travel across wires and gates.
AND/NAND. An AND operator returns 1 only when both inputs are 1, and
0 otherwise. A NAND operator is the combination of a NOT operator
with an AND operator. The equations describing AND and NAND operators,
respectively, are as follows:

OUT = A · B
OUT = ¬(A · B)
OR/NOR. An OR operator returns 0 only when both inputs are 0, and
1 otherwise. A NOR operator is the combination of a NOT operator with an OR
operator. The equations describing OR and NOR operators, respectively, are
as follows:

OUT = A + B
OUT = ¬(A + B)
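Because these operators are simply Boolean functions, they can be mimicked as one-bit C functions and checked against their truth tables. This is a software analogy for intuition, not how gates are physically built:

```c
/* The fundamental combinational operators as one-bit C functions
 * (the ints 0 and 1 stand in for logic values). */
int NOT(int a)         { return !a; }
int BUFFER(int a)      { return a; }          /* same logic value out */
int AND(int a, int b)  { return a && b; }
int NAND(int a, int b) { return !(a && b); }  /* NOT of AND */
int OR(int a, int b)   { return a || b; }
int NOR(int a, int b)  { return !(a || b); }  /* NOT of OR */
```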
Multiplexer (MUX). A MUX operator comprises at least two types of
inputs. One type, the data inputs, is a set of inputs, only one of which will
proceed and be transferred to the output. The other type, the control inputs,
determines which one of the former inputs will be transferred to the output. Thus,
a MUX gate will always have at least one control input and two data inputs. The
equation for a two-input MUX operator, with control input S and data inputs A
and B, is as follows:

OUT = (¬S · A) + (S · B)
Question. What is the minimum number of control inputs in a multiplexer?

Since the control inputs need the ability to select from all data inputs, the
minimum number of control inputs is as many bits as necessary to count the
number of data inputs (i.e., the base-2 logarithm of the number of data inputs,
rounded up).
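That count, the smallest number of control bits c such that 2^c covers all the data inputs, can be computed with a short helper (a hypothetical utility for illustration, not a design tool):

```c
/* Minimum number of MUX control (select) inputs for a given number of
 * data inputs: the smallest c with 2^c >= data_inputs. */
int min_control_inputs(int data_inputs) {
    int c = 0;
    while ((1 << c) < data_inputs)
        c++;
    return c;
}
```

For example, a 2-input MUX needs 1 control input, a 4-input MUX needs 2, and a 5-input MUX already needs 3.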


Exclusive OR (XOR). Although not shown in figure 4–2, the XOR operator
is important and can be formed on the basis of the fundamental operators
described previously. The output of an XOR operator is 1 if and only if the inputs
are different, that is, if one input is 1 and the other input is 0, or vice versa.
For this reason, XOR is very commonly used in arithmetic and communications
operations, especially when checking whether two pieces of information are
equal (e.g., when transmitting information over a lossy channel). The equation
for an XOR operator is as follows:

OUT = (A · ¬B) + (¬A · B)
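The XOR-based equality check described above can be illustrated in C: two words are equal exactly when the XOR of every bit pair is 0 (the function names here are ours):

```c
/* One-bit XOR built from the fundamental operators:
 * 1 if and only if the inputs differ. */
int XOR(int a, int b) {
    return (a && !b) || (!a && b);
}

/* Word-level equality check in the XOR style: C's ^ operator applies
 * XOR to every bit pair at once, so the result is 0 iff all bits match. */
int words_equal(unsigned a, unsigned b) {
    return (a ^ b) == 0;
}
```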
Sequential blocks
Sequential blocks store data upon a clock pulse (the clock signal
voltage goes up, then goes down, forming a pulse [or mountain peak]) or a
clock edge (the clock signal goes up or down). Sequential blocks can be classified
according to multiple dimensions, including

Clock behavior: edge sensitive versus level sensitive

Ability to be easily tested: scannable versus nonscannable

Size, in terms of the number of bits of information they can store: one-bit
versus N-bit (registers)

Logic functionality: none (no change in value) versus embedded
functionality, that is, in addition to storing values, the block also
simultaneously applies operations on them (NOT, NAND, NOR, etc.),
which may help with parameters such as speed or area

The most important sequential operators, based on the preceding classification,
are graphically depicted in figure 4–3.


Fig. 4–3. Fundamental sequential logic operators

Edge-triggered latch (single-bit). An edge-triggered latch is transparent
when the clock signal transitions from one value to another, typically from 0 to
1. When the clock signal has a fixed value, 0 or 1, the output maintains the value
that it had in the previous clock cycle.
Level-sensitive latch (single-bit). A level-sensitive latch is transparent
when the clock signal has a certain value, typically 1; otherwise, the output
maintains the value that it had in the previous clock cycle.
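The difference between the two clock behaviors can be sketched with a simple behavioral model in Python (my own simplification; real latch circuits are far more subtle, as chapter 5 discusses):

```python
class LevelSensitiveLatch:
    """Transparent while the clock is 1; holds its value while it is 0."""
    def __init__(self):
        self.q = 0

    def tick(self, clk, d):
        if clk == 1:
            self.q = d        # transparent: the output follows the input
        return self.q         # otherwise the stored value is kept

class EdgeTriggeredLatch:
    """Captures the input only on a 0 -> 1 transition of the clock."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def tick(self, clk, d):
        if self.prev_clk == 0 and clk == 1:
            self.q = d        # rising edge: sample the input
        self.prev_clk = clk
        return self.q
```

The level-sensitive model updates on every call while the clock is high; the edge-triggered model updates only at the instant the clock rises.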
Scannable (testable) latch (single-bit). A scannable latch has extra
inputs and outputs, to ensure that the chip is testable. When the latch is in scan
mode (i.e., the scan input is 0), the latch input becomes scanIN, and the latch
output becomes scanOUT. As a result, multiple latches can be linked in a chain
mode, to facilitate first getting data inside the chip (scanning in), then having the
chip perform a series of operations, and finally getting the data out of the chip
(scanning out) to test whether the chip meets the specifications.
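The scan-chain idea can be modeled as a simple shift operation: on each clock in scan mode, every latch loads the output of the previous latch, so bits entered at scanIN ripple down the chain and eventually emerge at scanOUT. A minimal Python sketch (names and structure are mine, for illustration):

```python
def scan_shift(chain, scan_in):
    # One clock pulse in scan mode: each latch loads the output of the
    # previous latch, the first latch loads scanIN, and the last
    # latch's old value appears at scanOUT.
    scan_out = chain[-1]
    return [scan_in] + chain[:-1], scan_out

# Scanning the pattern 1, 0, 1 into a 3-latch chain, one bit per clock:
chain = [0, 0, 0]
for bit in (1, 0, 1):
    chain, _ = scan_shift(chain, bit)
```

After three clocks the pattern sits inside the chain; three more clocks would shift it back out through scanOUT for comparison against expected values.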

RTL/Logic-Level design 53

_Carballo_Book.indb 53 2/20/08 4:54:05 PM

Register (multiple bits). A register is formed by assembling a set of
single-bit latches into a bit word. For example, to describe a number between 1
and 10, one needs four bits. These four bits can be stored and updated by a
four-bit register formed by grouping four one-bit latches. The output is the concatenation
of their outputs; the input is the concatenation of their inputs; and the clock is
the same for all.
Question. Why would one need four bits to describe a number between
1 and 10 when using a register?
The answer is simple: four bits can express a number as large as 15, while
three bits can only reach a number as large as 7. Therefore, the minimum number
of bits we need is four.
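The same arithmetic can be expressed in a small Python helper (illustrative only), giving the number of bits needed to hold any value from 0 up to a given maximum:

```python
def bits_needed(max_value):
    # Number of bits required to represent every integer from 0 up to
    # max_value; Python's bit_length() gives the position of the
    # highest set bit.
    return max(1, max_value.bit_length())
```

For a maximum value of 10 this yields 4, matching the reasoning above: three bits top out at 7, while four bits reach 15.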

Logic Design Entry Tools

As mentioned earlier, while a number of tools exist that designers can use
to enter logic design descriptions, coding-style tools are the most popular, since
logic design is most frequently a software-coding exercise. One popular design
entry tool is the publicly available Emacs editor; figure 4-4 shows it displaying
an example of a basic logic design in the Verilog language.

Fig. 4-4. Verilog description of a comparator block (top) and a divider block (bottom)


As figure 4-4 illustrates, the designer is working on two logic blocks
simultaneously: a comparator block and a divider block. Blocks are described
as modules in the language. In a manner similar to software programming
languages, these modules are first defined in terms of their name, their inputs,
and their outputs. For the comparator module, inputs include a, b, c, e, f, g, j,
k, and l, all of which are inputs of length (i.e., number of bits) FRAME_WIDTH,
whose value is 10; outputs include d, h, i, and m (see fig. 4-4). The divider
module, in turn, has two inputs and one output, all of which are of length N,
which in this case is 40 (see fig. 4-4). The other inputs (e.g., clr and clk) are
control inputs with various functions, such as clearing the input or providing a
clock signal (remember that this is a sequential block).
Beyond the definition of each module, there needs to be a description of
the details of each module. One useful construct is wires. Because we are trying
to describe a hardware block, a logic circuit, we need to connect components to
each other through wire constructs, many of which can be seen in the description
of the divider module (at bottom in fig. 4-4).
More generally, design entry tools should allow designers to describe at least
the following components:

Modules or blocks, small (logic gates) and large (adders, multipliers, etc.)
Inputs/outputs for each of the blocks and for the overall design

Connections between each of the blocks, most frequently in the form of wires

Function or behavior, regardless of structure (additions, subtractions, etc.)

Timing (arrival times for signals at specific points, wires, modules, etc.)

Example. Figure 4-5 shows another example in Verilog, focusing on two
separate blocks: an adder and a multiplexer. The first block is called F2 and is of
type Addcla (for adder of carry look-ahead type), and each of its inputs and outputs
has 32 bits. Its inputs are the two data inputs, A and B, and its carry in, CIN
(carried from prior additions). Its outputs are the actual sum, SUM, and its carry
out, COUT (to be carried to subsequent additions).


Fig. 4-5. Two simple blocks described in Verilog

The second block is a multiplexer called mux1 (this multiplexer is of type
mux3, because it has three inputs). It has three data inputs, IN0, IN1, and IN2;
two other inputs to define the selection, select0 and select1; and one output.
As figure 4-5 shows, each module input and output is often bound to a
variable, signal, or wire. For example, IN0 in our multiplexer is bound to signal
a. By binding the same signal a to another input or output of another module, a
connection between blocks is established.
Example. Figure 4-6 describes another example of a portion of a design,
this time in VHDL. This module, called tri_input_module, has three inputs, a,
b, and c, and two outputs, x and y. The module's function is as follows: First, a
AND b is computed and put into a temporary variable called temp; then, temp
is combined with NOT c. Thus, the function performed is a AND b AND NOT c.
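The data flow just described can be mirrored in a few lines of Python (modeling only the function derived in the text; the module's second output is not described here):

```python
def tri_input(a, b, c):
    # Mirrors the VHDL module's data flow: temp = a AND b, then
    # temp AND (NOT c), giving a AND b AND NOT c overall.
    temp = a and b
    return int(temp and not c)
```

The output is 1 only when a and b are both 1 while c is 0, exactly as the composed expression requires.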


Fig. 4-6. Description of a module in VHDL

As figure 4-6 also shows, a library is imported. This means that a number
of functions and modules are defined elsewhere, in this case in the IEEE library
(specifically, std_logic_1164), so that they do not have to be defined again
here. Typically, libraries include basic logic functions that would not need to be
defined again.

Logic Synthesis
Logic synthesis is the process by which a logic description is mapped to the
available fundamental logic blocks in a particular technology, called cells. These
are basic combinational and sequential blocks that belong to a library. The library
includes a fixed group of cells that provide commonly used functions and that can
be combined to form larger logic functions. Library cells are verified to work with
the manufacturing technology that will be used to fabricate the chip at hand. The
result of a logic synthesis execution is called a netlist, because it looks like a list
of gates interconnected by wires or nets.
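Conceptually, a netlist can be thought of as a small data structure: a list of gates, each an instance of a library cell, attached to named nets. The sketch below is a toy illustration in Python (field names are mine, not from any real tool format; the output pin is assumed to be named "y"):

```python
# A netlist is a list of gates, each an instance of a library cell,
# plus the nets (wires) that connect their pins.
netlist = {
    "gates": [
        {"name": "g1", "cell": "NAND2", "pins": {"a": "n1", "b": "n2", "y": "n3"}},
        {"name": "g2", "cell": "INV", "pins": {"a": "n3", "y": "n4"}},
    ],
    "nets": ["n1", "n2", "n3", "n4"],
}

def fanout(nl, net):
    # Names of gates whose *input* pins attach to the given net.
    return [g["name"] for g in nl["gates"]
            if any(p != "y" and n == net for p, n in g["pins"].items())]
```

Real netlist formats are richer than this, but queries of exactly this kind (which gates does a net drive?) are what timing and power tools run over a netlist.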
Logic descriptions rarely match exactly the set of cells in a library, for several
reasons. Descriptions may be written at a high level to improve the productivity
of designers. Portions of the descriptions may be written in the form of behavioral
statements. Designers may not know exactly what cells the library contains.
Finally, it may be more effective to write a technology-independent description,
especially in cases in which the same chip is to be manufactured by multiple
suppliers over time. Logic synthesis provides this mapping.


The basic mapping to a technology library is the conceptually simple (although
not uncomplicated) part. Mapping needs to be done in an extremely efficient manner,
because today's circuits need to be very fast and have very low power consumption.
For each logic operation, the best combination of logic library cells needs to
be selected.
Logic synthesis tools are semiautomated tools that perform a set of operations,
mostly sequential, including the following:

Simplify logic (no redundancy). Logic descriptions typically start out
as redundant. Signals and variables that could be shared are often
not identified. Logic synthesis identifies signals and logic gates that
can be shared among various functions, so that the area and power
consumption can be improved.

Reorganize logic (efficiency). Logic synthesis also reorganizes logic so
that it can be more effectively mapped to the gates in the library.

Decompose logic (so that it maps to existing library gates). As
mentioned earlier, logic descriptions are usually not expressed in terms
of library gates, so this mapping needs to be done in any case.

There are many ways in which logic synthesis could map a logic description
into library gates. Some criteria need to be applied for what amounts to an
optimization process. The most popular criteria include the following:

Timing. The netlist is generated such that it is either the fastest possible
or guaranteed to meet a certain speed requirement. This tactic usually
implies that the resulting circuit consumes more power and takes more
area than it would otherwise.

Area. Gates are optimized for area; thus, the smallest possible size for
each individual gate that satisfies all other constraints is chosen. As a
result, chips coming off the fabrication line may be slower on
average yet consume less power than they would using a different tactic.

Power. Gates are optimized for power consumption. This usually
implies an area-efficient approach, as smaller gates tend to consume less
power. However, this typically comes at the expense of speed. This approach
is more commonly used for low-power devices, especially in portable
consumer applications such as cell phones.

Figure 4-7 depicts graphically the logic synthesis process, including its various
inputs and outputs. As figure 4-7 indicates, one key input of the logic synthesis
process was just discussed, namely, the result of RTL/logic-level design entry. Such
a logic description includes most of the information needed, as it faithfully describes
the circuit we are trying to synthesize, in a language such as Verilog or VHDL.
Another key input is a set of constraints. Constraints provide information to the tool
about what requirements must absolutely be met by the generated netlist. Traditionally,
constraints are focused on timing; that is, for each important pin, they describe
information about the times at which the signal needs to arrive. Thus, constraints are
statements of the form "Signal A needs to arrive at time T at pin P of module M."
Other types of constraints, however, are definitely possible and increasingly popular,
such as power consumption constraints and size/area constraints.

Fig. 4-7. The logic synthesis process

A third critical input is (as you might guess) the library of gates and blocks that
the technology permits. This library will generally include all basic
logic functions, with a number of sizes for each of these functions. For example,
for the NOR function, the library could include 10 different gates, from NOR1
to NOR10, with the latter being a much larger cell than the former. As will be
discussed later in chapter 5, the size of a logic gate depends largely on the size
of its internal devices, its transistors, and the difficulty in interconnecting these
internal devices.
The main output (as you might again guess) constitutes the mapped logic
gates, or netlist. The logic netlist is a critical piece of information in digital design.
Modern design tools allow the performance of myriad operations on a netlist,
to verify a large number of aspects of a design (e.g., timing analysis or power
consumption analysis) and to generate the final details of the design (e.g., the
layout on the basis of which the manufacturing masks are constructed) before
sending it to the manufacturing team.


As figure 4-7 also indicates, the logic synthesis process consists of two
internal steps:

Logic optimization. In the first, technology-independent step, the logic is
arranged to be as efficient as possible, regardless of the library to which
it will be mapped. For example, redundant gates or logic statements are
eliminated and/or shared, and the number of basic operations needed to
perform a complex operation is minimized.

Technology (library) mapping. The second, technology-dependent
step consists of mapping the resulting intermediate netlist onto the
technology library provided. At minimum, the size of each gate needs
to be determined to satisfy the various input constraints, such as timing,
power consumption, and possibly area.

In reality, logic synthesis software is currently extremely complex (possibly
involving many thousands of lines of software code) and does not easily match
this simplified two-step process, owing to technology and design complexity. For
example, it is useful to start considering technology-dependent aspects early in logic
synthesis, since technology largely determines how efficient a netlist is and whether
the final technology-mapped netlist meets the timing, power, and area constraints
provided. On the one hand, logic synthesis is a single process that produces a
netlist mapped to a technology library. On the other hand, it is a complex process
with many intertwined steps, some of which may require user intervention.
From the perspective of a designer who needs to utilize logic synthesis,
despite the level of automation of today's synthesis tools, the following tasks
need to be undertaken:

The input logic description needs to include constructs that can indeed
be mapped to the technology library. For example, the library may not
have edge-triggered latches; thus, those should not be included explicitly
in the description.

The input logic description needs to be provided at an appropriate
level of abstraction. For example, descriptions at too high a level may not
be mappable by the synthesis engine, or may not be mapped efficiently
by such an engine. For example, while it may be possible to write a
complex algorithm with various types of loops and complex variables
(i.e., a high-level behavioral description) in Verilog or VHDL, it may not
be synthesizable.

The constraints need to be able to be met by the design. For example,
if an input timing constraint implies a frequency of 20 gigahertz for
the chip, it is likely that many technology libraries (and, hence, many
manufacturing technologies) will not be able to satisfy this constraint,
almost independently of the design at hand.

The library needs to be chosen and set up appropriately. A library
intended for wireless, portable, low-power designs will hardly be useful in
most high-performance microprocessor-based SoC designs for console
gaming applications.
Once the netlist is generated, then what? As mentioned before, once the
logic netlist has been generated for a particular logic block, a substantial number
of tasks need to be undertaken to verify that the block will work according to
specifications once it is fabricated. Since the netlist includes significant technology
information, this stage of the design process can produce largely accurate
information. The key items that need to be verified are

Functionality. Will the chip perform the function it is supposed to perform?

Speed/timing. Will the chip be fast enough?

Power consumption. Will the chip consume too much power?

Signal integrity. Will the chip be too sensitive to internal and external noise?

Testability. Will the chip be easy to test?

Other design checks. For example, are the names of pins and blocks correct?

The following sections cover these extremely important tasks that need to be
performed before generating the final block and chip layouts.

Logic Simulation
In the process of logic simulation, designers figure out whether a created
logic block performs the functions it is supposed to. Figure 4-8 depicts the logic
simulation process.
Question. As figure 4-8 indicates, logic simulation can take the same
presynthesis logic description as an input. Why is that enough? Should it not take
the synthesized netlist?


Fig. 4-8. Logic simulation process

Basic functionality does not depend on technology details: a logic description
is basically a functional description with additional details. Therefore, logic
simulation can generally take as input the logic description created during the
design entry phase.
Nevertheless, checking for functionality requires some reference. Thus,
there needs to be another critical input, which is the set of input patterns or,
more broadly, the testbench. In other words, to check functionality of a block,
one needs to check that the outputs of that block are what they are supposed
to be, given a certain set of inputs. Those inputs are necessary to perform the
simulation. The outputs, for comparison, will be needed at some point, either as
part of the testbench or later, once the simulation is complete.
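The essence of a testbench, applying input patterns and comparing outputs against expected values, can be sketched in a few lines of Python (a conceptual model, not a real simulator):

```python
def run_testbench(dut, vectors):
    # Apply each input pattern to the design under test (here just a
    # Python function) and record any mismatches against the expected
    # outputs stored in the testbench.
    failures = []
    for inputs, expected in vectors:
        got = dut(*inputs)
        if got != expected:
            failures.append((inputs, expected, got))
    return failures

# Testbench for a 2-input AND block: the full truth table with
# expected outputs.
and_block = lambda a, b: a & b
vectors = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

An empty failure list means the block matched the testbench on every pattern; a nonempty list plays the role of the mismatched waveforms a designer would then debug.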
The result of logic simulation is often a mixture of graphical, textual, and
numerical information. Waveforms of digital signals (1s and 0s) are still a very
common viewing mechanism for inputs, outputs, and internal signals in a digital
block. Waveforms represent signals as lines on a horizontal scale, representing
time, with upward and downward steps representing increases and decreases,
respectively, in the value of the signal, typically from 0 to 1, or vice versa.
Debugging of the logic description functionality is often performed by
visually comparing the resulting waveforms with the desired waveforms. (The
term "debugging" is commonly used in software development to describe the
process of finding and fixing problems in the software code; because chip and
hardware design largely consists of developing code in languages like Verilog or
VHDL, the term applies in this context as well.) When a waveform appears not
to match the desired result at a certain time, the designer finds the root cause of
the problem and tries to fix it by changing the design description.

62 Chip Design for Non-Designers: An Introduction

_Carballo_Book.indb 62 2/20/08 4:54:08 PM

Waveforms can be a powerful source of information, because they can
depict many signals over time in an intuitive manner; even groups of signals
can sometimes be represented with a single graphical waveform. However,
waveforms also have limitations when designs are very complex. As a result,
beyond detailed debugging, other techniques can be used that provide more
automation and require less human intervention. Regression tests, which
may be run nightly, can automatically produce a large number of simulations
and comparisons with desired outcomes. These results may be examined the
following day, to look for and address individual problems through design
changes. When addressing each of these problems, returning to graphical,
waveform-based evaluation and debugging may be of immense help. Thus,
a combination of automated, batch-based techniques, resembling those used
in classical software development, plus graphical, signal waveform-based
techniques is the most popular and effective logic functional verification
strategy in use today.
Example. Figure 4-9 depicts an example of designer actions when completing
functional verification of a specific logic block.

Fig. 4-9. Logic design loop based on debugging of a functional error

As figure 4-9 indicates, the designer first pursues design entry; that is, uses
an editor tool to enter a logic description in the form of Verilog code, plus the set
of input vectors or testbench, to enable verification of functionality. As with most
logic simulators, the code is first compiled into a software object that can be more
effectively simulated by the tool. (Verilog is text, representing certain structures.
Compilation will explicitly generate those structures into data types that can be
directly accessed by the simulator code.)
The simulator itself is then run. As a result, a set of waveforms can be
displayed by a viewer tool that shows logic values as a function of time for every
available (simulated) signal. The designer compares the displayed values against
the desired values, especially for outputs. At this point, the designer finds a signal
for which the value is 1 when it should be 0; that is, the logic is simply wrong since
the results are wrong, assuming that the testbench/input vectors are correct.
The designer then invokes the editor tool to change the Verilog code. The fix
that the designer finds is to change a certain AND gate to a NAND gate, thereby
changing the output of that gate.
The designer then recompiles the Verilog code; otherwise, the same design
would be simulated again. The viewer tool is utilized again to view the waveform
results. The same problematic signal is displayed by the tool and examined by
the designer. The results now turn out to be correct. At this point, the designer
may decide to enter a separate set of input vectors to test another case; that is,
to change or expand the design's testbench.
After functional verification, which is most commonly pursued using
simulation, the following verification tasks are most commonly executed:

Timing verification. The goal is to verify that the logic circuit is as fast
as required.

Power analysis. The goal is to verify that the logic circuit is as energy
efficient as required. A simulation is often used here as well.

Test insertion/verification. There are two key goals here:

Test synthesis. Add the necessary logic circuits (combinational and
sequential logic), if necessary, to improve testability. This is not really a
verification step. It is included here because, although in theory it may
be run at the same time as logic synthesis, it is often not.

Test verification. The goal is to verify that the logic circuit is easily
testable (i.e., to verify testability).

Prephysical design (pre-PD) checks. The goal is to ensure that design
methodology rules are met.

The next sections detail these tasks individually.


Logic Timing Verification
Timing analysis has been a cornerstone of modern digital design for several
decades. Before design automation came of age, it was originally pursued by
hand, literally by examining a complex diagram of logic functions and finding
the longest and shortest paths. Today's timing analysis (finding timing problems)
and optimization (fixing those problems in the most effective way) is a complex,
difficult process, yet includes much automation, at least in the area of finding the
problems. (It could not be otherwise, since logic blocks routinely have tens to
thousands of logic gates.) Figure 4-10 depicts this process graphically.

Fig. 4-10. Key aspects of timing analysis

As in any other logic-related design task, timing analysis takes a set of inputs
including the actual description of the logic, which was entered during logic design
entry. While this input may have some timing annotations, timing information is
mostly contained in other files or inputs to timing analysis.
Specifically, timing analysis takes a critical input, namely, timing constraints.
The most important type of constraints in modern digital design for decades
(recently, other constraints, e.g., power constraints, have moved to the
forefront), timing constraints are intended to ensure that the entire logic block
enables the chip to go as fast as the initial performance specification indicates,
while leaving enough room for manufacturing imperfections and uncertainties.
Timing constraints often come with boundary conditions called arrival
times. A timing constraint typically refers to the latest and earliest possible
time a signal needs to arrive at a storage device (i.e., a latch) so that the signal
is properly stored when the next clock cycle opens up that latch temporarily.


To determine whether that signal meets that constraint, we need to understand
at what time the signal arrives at the beginning of the logic that leads to that
storage device. That arrival time is given either by the user (if the signal comes
from an edge of the chip and is not connected to another block) or by the timing
of the signals coming from other blocks feeding this one. Even in the latter case,
the designer
may need to enter a user-given number until those dependent arrival times are
actually calculated.
It should be apparent from the previous paragraph that timing analysis is
complicated by

The characteristics of the storage devices. For example, what
time-interval tolerance for an arriving signal is required for the latch to
correctly store the signal value when the clock ticks next time?

The dependencies between separate blocks that are interconnected by
timing-sensitive signals.

The key output that comes from timing analysis comprises timing violations.
For each timing constraint, the timing analysis output indicates whether the
constraint is satisfied; otherwise, it indicates how that constraint was violated
(e.g., was the signal too early? was it too late?).
Example. Figure 4-11 depicts a timing analysis applied to a simple logic
circuit in two cases: with no timing violations and with one timing violation.

Fig. 4-11. Timing analysis for a simple function


As figure 4-11 indicates, the simple logic block being implemented contains
the equivalent of the following function on two numbers I1 and I2 (each of which
is expressed as a set of bits):

O = I1 × I1 + I2

This function, however, is executed sequentially in two clock cycles. First, the
square function of I1 is computed and stored in a register, R, on the first clock cycle:

R = I1 × I1

Then, on the second clock cycle, as the second input comes in, the result is added
to the second input, I2, and is stored in the output register, O:

O = R + I2

In the example, when the timing and functionality of the logic block are
correct, for inputs I1 = 8 and I2 = 4, the output produces O = 68 (i.e., 8 times
8 plus 4) two clock cycles later:

O = 8 × 8 + 4 = 68

When timing is incorrect, a violation is produced by timing analysis. This

case is illustrated in the top-right corner of figure 4-11, where the result
arrives too late, after the output latch has sampled the signal. When this
sampling occurs, the latch may store an incorrect value (i.e., different
from 68).

When timing is correct, timing analysis returns no violations. This case

is illustrated in the bottom-right corner of figure 4-11, where the result
arrives on time, before the output latch has sampled the signal. When
this sampling occurs, the latch stores the correct value (i.e., 68).
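Ignoring clocking details, the two-cycle data flow of this example can be sketched in Python (the register names are illustrative):

```python
def two_cycle_block(i1, i2):
    # Cycle 1: compute the square of I1 and store it in register R.
    r = i1 * i1
    # Cycle 2: add I2 and store the sum in the output register O.
    return r + i2
```

With I1 = 8 and I2 = 4, the sketch reproduces the correct-timing result of 68; the timing question is whether the real circuit finishes each step before its latch samples.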

The type of constraint in the example is called a set-up constraint; that is, a
constraint by which the signal needs to arrive at the latch input before a certain
time, which happens to be the next clock edge. The latch needs to see the
signal at a certain interval before the clock edge, owing to a phenomenon called
metastability. If the signal arrives later than that time, the latch may sample an
incorrect value. As will be shown for circuit design (see chap. 5), latch circuits
need a certain amount of time to actually latch the data internally so as to remain
stable until the next clock edge. If enough time is not provided, the signal is
metastable; that is, it may remain at some middle value or, worse, return to
where it was before sampling.
Conversely, there is a type of constraint called a hold constraint, by which
the signal corresponding to the next clock cycle needs to arrive at the latch
input after a certain time; otherwise, it will overwrite the signal, from the prior
clock cycle, that was correct. The latch should never see the next-cycle signal
until a certain time interval after the clock edge has passed, owing to the same
metastability phenomenon.
The output of timing analysis is a description, traditionally a text file, of
each of the timing violations encountered plus a description of the constraints that
were actually satisfied, including the time interval by which they passed those
constraints. This interval is called slack and is a measure of time that sometimes
can be borrowed to make some of the logic gates smaller and thus save power
and area.
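The arithmetic behind setup checking and slack can be sketched as follows (a simplified model with made-up time units; real analysis also accounts for clock skew, latch characteristics, and manufacturing variation):

```python
def setup_slack(arrival_time, clock_period, setup_time):
    # The data must settle at the latch input at least setup_time
    # before the capturing clock edge.  Positive slack means margin;
    # negative slack means a setup violation.
    required_time = clock_period - setup_time
    return required_time - arrival_time
```

With a 100 ps clock and a 15 ps setup time, a signal arriving at 80 ps has 5 ps of slack, while one arriving at 90 ps violates setup by 5 ps.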
Example. Figure 4-12 gives an example of the process of fixing a timing
constraint violation. As usual, the designer creates a description of the logic block
in a logic design language (in this case Verilog). The code is compiled, in this case
for the timing analysis tool to operate on (possibly a different type of compilation
and definitely a different type of tool than the one used for functional verification).
The timing analyzer, that is, the timing analysis tool, runs and produces its
output, including which constraints were violated, the type and amount of each
violation, which constraints were satisfied, and the type and amount by which the
constraints were safely passed.

Fig. 4-12. Simplified process of fixing a constraint violation

As figure 4-12 shows, the designer pulls up the output file with a text editor to
view the output information. The designer finds, among other items, that the first
output bit in the output register, O(1), features a constraint violation, specifically
a setup violation. The first output has arrived at the latch pin 5 picoseconds
(5 × 10⁻¹² seconds, or 5 trillionths of a second) too late.
The designer then goes back to the logic block description and finds that
there is a logic gate in the path that leads to the violation (i.e., the chain of
logic gates that ends in such a violating pin) that could be made a little larger,
potentially fixing the violation at the cost of area and power consumption. Thus,
the designer changes the size of such a gate, from 3 to 5, effectively selecting a
larger gate from the technology library.
The designer then invokes the timing analyzer tool, thereby compiling the
code and verifying the logic block again. Once the run is complete, the designer
again pulls up the result with the text editor and goes down to the same
constraint to check whether it is still violated. Fortunately, the result indicates that
the constraint is no longer violated and now has 10 picoseconds of slack.
Question. Since there is so much slack, would it make sense to explore
decreasing the size by only a little, for example, to 4? What are the risks of
doing so?
Sometimes slack can confuse a designer. Reducing the size of the gate to an
intermediate value may not actually work, since slack is a complex function that
does not vary exactly linearly with the size of the gate.
Conversely, it may not be possible, after all, to grow the size of the gate all
the way from 3 to 5, since this might increase power consumption to a level
beyond what the design's power constraints allow. In other words, we need
to consider power analysis as well, which will be explained later (see "Logic
Power Verification").

Impact of manufacturing on timing

Logic circuit timing is directly affected by manufacturing characteristics for
several important reasons. This statement will become even clearer when we
examine the circuit design and layout design levels in this book. Unfortunately,
the manufacturing process has imperfections and hard-to-predict variations that
have a significant impact on timing. As a result, logic gates could be faster than
expected/average, or slower than expected/average. Therefore, it is common
for timing analysis to produce a number of outputs, one for each case under
consideration. These cases are called corners. Multiple parameters can be varied
to form these corners. The most common parameters are process, voltage,
and temperature. When gates are fast, they tend to consume more power, and
vice versa.
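The idea of corners can be illustrated by enumerating combinations of the three parameters (the labels below are mine; real flows use foundry-defined corner names, and usually analyze only a subset):

```python
from itertools import product

# Illustrative corner labels for process, voltage, and temperature.
processes = ["slow", "typical", "fast"]
voltages = ["low", "nominal", "high"]
temperatures = ["cold", "hot"]

corners = list(product(processes, voltages, temperatures))
# 3 process x 3 voltage x 2 temperature choices give 18 candidate corners.
```

Timing analysis is then repeated once per selected corner, so a path must meet its constraints in the worst case, not just the typical one.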
Let's start by examining why the manufacturing process affects timing
results. Following are three important principles that describe the impact that
manufacturing has on timing:

Manufacturing affects the dimensions, the material content, and, thus, the fundamental parameters of devices and wires. Chip electrical devices are made by layering various materials by virtue of a manufacturing process; furthermore, manufacturing imperfections and/or variations affect all geometry dimensions and, thus, their fundamental electrical device parameters (e.g., the threshold voltage of a transistor, i.e., the voltage that needs to be applied between its two key pins to be able to turn it on).

RTL/Logic-Level design 69

Fundamental device and wire parameters determine their key electrical characteristics. Fundamental electrical parameters of devices and wires in a chip, such as their effective resistance, capacitance, and inductance (details on these parameters are provided later), depend on wire dimensions, transistor (device) dimensions, and the fundamental electrical parameters of these devices (e.g., the aforementioned threshold voltage).

The fundamental performance (i.e., timing) characteristics of a chip depend on key electrical characteristics. Timing delay depends on these fundamental electrical characteristics (resistance, capacitance, and inductance) of devices and wires existing in every logic circuit of gates and interconnections.

What about voltage? The higher the voltage applied on a digital logic gate is
(as we will see in chap. 5), the faster this gate may run. The voltage supply for a
chip has a certain level of uncertainty, owing to the various sources of variation
of this voltage:

As it travels from its power source through its wiring into the chip, then
through internal wires to each of the logic blocks, and, in turn, to each
of the logic gates, the signal gets distorted in various ways (this will be
discussed later).

Manufacturing imperfections and uncertainties affect this voltage since this voltage is generated and transported through devices and wires.

What about temperature? Temperature, apart from posing a hard-to-solve reliability problem, affects the performance of electrical devices in logic circuits. For example, it affects the threshold voltage of transistors, that is, the
voltage needed to turn them on. As a result, temperature represents another
corner. Temperature is an unknown because it is impossible to predict exactly
what temperature a chip will experience (e.g., consider your cell phone: what
temperature can you guarantee for it every day?); thus, chips are generally
designed to work within a large temperature range (e.g., 25–100°C).
Considering these three corners, a typical timing analysis run can produce up
to six separate results. Each result needs to be analyzed for constraint violations,
further complicating the timing analysis and optimization problem.
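The corner bookkeeping can be pictured with a short sketch. It assumes the simple scheme described above, each of the three parameters analyzed at its two extremes, for up to six results; the delay numbers are hypothetical.

```python
# Hypothetical worst-path delay (in ps) reported by each corner run.
CORNER_DELAYS_PS = {
    ("process", "slow"):      131,
    ("process", "fast"):      104,
    ("voltage", "low"):       128,
    ("voltage", "high"):      109,
    ("temperature", "high"):  125,
    ("temperature", "low"):   112,
}
REQUIRED_PS = 130  # toy timing constraint

# Every one of the (up to six) results must be checked for violations.
violations = sorted(c for c, d in CORNER_DELAYS_PS.items() if d > REQUIRED_PS)
print(len(CORNER_DELAYS_PS), "results; violations:", violations)
```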


Logic Power Verification
Power analysis and/or verification were not a fundamental part of modern digital design until relatively recently. What has changed is the enormous growth of power consumption-related issues in contemporary designs, following
several key trends:

Customer demand. First, there is huge demand for computational power and bandwidth in today's chips for wired applications (computers, communications equipment, storage devices, etc.). Second, there is ever-increasing demand for functionality and performance at very low cost for portable devices (e.g., cell phones).

Technology limitations. First, limitations in the scaling of digital CMOS technology have made chips consume ever larger amounts of power, whether they are on or off, for every piece of silicon real estate. Second, limitations in the packaging technology used to encapsulate chips result in a hard limit on the amount of power a chip can consume while utilizing a cost-effective package.

Just like timing analysis, before design automation came of age, power
analysis was originally pursued by hand, literally by examining a complex
diagram of logic functions and adding up the estimated amount of power
each function would consume. Today's power analysis (determining the
amount of power consumed) and optimization (finding ways to reduce power
to satisfy unmet power requirements) is becoming a complex, difficult process,
especially when combined with the need to meet difficult timing constraints
(in many cases, timing and power pull in opposite directions); yet, much automation is included, at least in computing the amount of power consumed. Even in that regard, however, the recent explosion of static, or leakage, power, that is, the power consumed when the chip is not performing any useful work, is producing great problems in chip design, not only because it is difficult to reduce but also because it is very hard to estimate accurately. Figure 4-13 depicts the power analysis and optimization process graphically.


Basic concepts in power analysis and optimization
Definition. Power consumption is the amount of energy dissipated by a given
circuit block per unit of time (i.e., per second). In an electrical/electronic circuit
(the scope of this book), power consumption at a certain instant is always given
by the voltage applied to the circuit times the current going through the device.
Thus, the most basic definition for power consumption is as follows:

P = E/t = V × I

where P represents power, in watts; E represents energy, in joules; t represents time, in seconds; V represents electrical voltage, in volts; and I represents electrical current, in amperes.
Unfortunately, things get a little more complex than this simple definition.
There are two types of power consumption in integrated circuits:

Dynamic, active, or alternating-current (AC) power consumption

Static, leakage, or direct-current (DC) power consumption

Dynamic, active, or AC power consumption. When a logic circuit is active and producing useful work, its logic gates, latches, and wires are moving
up and down in terms of voltage. Moving these voltages up and down requires a
certain amount of energy, which results in the consumption of dynamic power.
The primary reason for the consumption of dynamic power is that moving
voltages requires charging and discharging capacitances, which is why this type of
power is often related to the so-called switching capacitance of devices and wires
in a chip. To understand this effect, we can look at the fundamental relationship
between charge and voltage, namely, capacitance:

C = Q/V

where C denotes capacitance, measured in farads; Q denotes charge, measured in coulombs; and V denotes voltage. For a given capacitance in a device or wire, raising its voltage means filling it up with a certain amount of charge. Conversely, bringing down the voltage requires emptying the charge in the capacitor.
Question. Where is the connection with power consumption?
It takes a certain amount of current to provide the charge, following the
fundamental capacitance equation:

I = C × dV/dt

In other words, the current is equal to the derivative of voltage with respect to
time, multiplied by the capacitance itself. Because it takes current over time to
charge a capacitance, it takes power consumption to do so. Designers call this the CV² component of power consumption, which can be expressed using the following equation:

P = C × V² × f     (4.15)

where f is the overall frequency of the chip, which in a first approximation is the inverse of the period of its fastest clock cycle, T. In other words, the faster a chip runs, the higher its voltage, and the larger its equivalent capacitance, the more dynamic power it will consume.
Still, a more precise yet simple equation is possible. Not all circuits in a chip
are switching constantly between 0 and 1. Otherwise, we would have a clock
signal in each wire. As a result, the percentage of time that a circuit is switching
fully from 0 to 1, and vice versa, needs to be added to the equation, as a multiplier
to the formula proposed in equation (4.15):

P = α × C × V² × f     (4.16)

where α is the average percentage of time that the circuit is switching fully between 0 and 1, and vice versa.
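Numerically, the model of equation (4.16) is straightforward to evaluate; the parameter values below are illustrative, not drawn from any real technology.

```python
# Activity-weighted switching-capacitance ("CV^2 f") dynamic power:
#   P = alpha * C * V**2 * f
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Dynamic power in watts."""
    return alpha * c_farads * v_volts ** 2 * f_hertz

# Illustrative block: 1 nF of switching capacitance at 1.2 V and 1 GHz,
# switching 10% of the time.
p_nominal = dynamic_power(0.1, 1e-9, 1.2, 1e9)

# The main power-saving levers map directly onto the parameters:
p_gated = dynamic_power(0.1, 1e-9, 1.2, 0)     # clock gating: f -> 0
p_scaled = dynamic_power(0.1, 1e-9, 0.9, 1e9)  # voltage scaling: lower V

print(p_nominal, p_gated, p_scaled)
```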
This is a very powerful analytical model. Modern power-saving techniques
are very strongly based on leveraging this formula (eq. [4.16]). First, clock-gating
techniques focus on turning on the clock on a chip's circuits only when needed. When a block or a portion of a chip is not needed for a certain time, turning
its clock off is equivalent to reducing the parameter f to 0. Second, voltage
scaling is another very powerful technique, based on turning down the voltage
of a specific block when that block allows (i.e., without losing too much speed
and becoming nonfunctional). This technique requires voltage controllability
and granularity, that is, the ability to control the voltage (as opposed to its
being fixed by a constant external source) and the ability to control each block
independently, respectively. A block can also be turned completely off by turning
down its voltage supply all the way to 0. In any case, the parameter we are
playing with in equation (4.16) is V.
Finally, there is capacitance, C. There are numerous ways to reduce (and,
unfortunately, increase) capacitance in a chip. Wires have a capacitance whose
value goes up as their width and length increase, everything else being equal. Wires


are used to transfer the clock signal and data signals across the chip to its various
blocks. By reducing capacitance, the speed of the chip can be improvedand
the power as well! Unfortunately, there is often a trade-off, since long wires tend
to be more resistive. (Both angles will be explored in chap. 5.) Transistors inside
gates, and therefore gates themselves, have a larger input capacitance when they are
larger (higher width). By gating certain logic blocks and blocking them from being
seen by other blocks, designers can also reduce power by reducing the effective
capacitance. Another approach is to reduce cross-talk capacitance, which refers
to the coupling between two separate wires and/or devices. By keeping wires
separate from each other, capacitance can be reduced substantially.
Static, leakage, or DC power consumption. Static power consumption,
by contrast, is the power consumed by a circuit when it is not doing any useful work.
Voltages are not being changed in value dynamically (except those corresponding
to clock signals, if they are not turned off), yet the circuit is still consuming power.
The key reason is leakage, that is, the gradual loss of charge from the various capacitors and channels inherent in a circuit's components (devices and wires).
Capacitors include an insulator material to separate cleanly the two plates on
which the voltage is applied. One plate contains positive charge, and the other
plate contains negative charge. Because the material is not a perfect insulator,
a small current passes between plates through the insulator, even when there is
no change to the capacitors charge. Wires and devices include capacitors, since
there is always an insulator between separate wires and since there is an insulator
between the gate of a transistor and its other terminals or pins. Transistors also
have a channel between the drain and source terminals, through which a current
should flow if and only if the transistor is turned on. Consequently, there are
several types of leakage: on each transistor device, there is subthreshold leakage
(current that runs through the device even when turned off) and gate leakage
(current that goes through the insulator material that separates the gate of the
device and its other pins).
As a result, we can say that power consumption has two components,
expressed as follows:

P = PS + PD = V × (IS + ID)

where PS and IS denote static power and current, respectively, and PD and ID denote dynamic power and current, respectively.
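A minimal numeric sketch of this two-component equation (the current values are invented for the example):

```python
# Total power as the sum of static and dynamic components:
#   P = P_S + P_D = V * (I_S + I_D)
def total_power(v_volts, i_static_amps, i_dynamic_amps):
    """Total power in watts at supply voltage V."""
    return v_volts * (i_static_amps + i_dynamic_amps)

# Illustrative block at 1.2 V: 5 mA of leakage current plus
# 20 mA of average switching current.
p = total_power(1.2, 0.005, 0.020)
print(round(p * 1000), "mW")
```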
It is not easy to assign a single power number to each of these parameters. In
other words, power consumption depends on a number of factors, including

Input pattern vectors. Consider a circuit that receives absolutely no inputs to compute on. Typically, the only power consumed by that circuit will be static power. Conversely, consider the same circuit whose inputs


change continuously. In this case, the circuit will probably consume much
more power. Clearly, the type of patterns and how fast they change
really affects the amount of power consumed by a logic block.

Environment. Environmental parameters (temperature, voltage, etc.) influence power consumption. Temperature exerts a strong influence
on power consumption. As temperature rises, the characteristics of
each device change. High temperature makes devices leakier. Very high
temperature across the chip can also produce very hard problems with
its package. Hot spots, that is, areas of the chip that have much higher power density, even temporarily, lead to additional problems, since they imply a difference between the various regions in the chip. Such an imbalance may result in the chip's simply not working, since conventional
design assumes that all similar devices perform similarly across the chip.

Circuit/logic topology. Since there are many ways to implement a logic function with a library (the whole purpose of logic synthesis), each of these possible implementations may have a different power consumption
these possible implementations may have a different power consumption
associated with it. For example, several redundant gates may be added
to speed up a design, or larger gate sizes may be chosen for the same
reason. In both cases, power consumption, dynamic and static, is likely
to be higher.

Process variations. This factor is less obvious. As circuits become smaller, silicon manufacturing technologies become more complex and thus more difficult to control. As a result, circuits present higher variability. One consequence of higher variability is more power consumption. Specifically, variability increases leakage power consumption, as devices consume indeterminate amounts of power, even when they do not do any useful work.
Power evaluation happens all across the design flow, from top to bottom.
This is a very important strategy. If power consumption were evaluated only late
in the design project, it might actually be too late to fix it, or it could cost too
much, in terms of time and resources (i.e., money), to fix it. Numerous studies
have shown that design changes early in the design process can have a much
stronger impact on power consumption than later changes when lots of details
are set in stone. For example, we may find that we can turn off completely a
whole adder block for 80% of the time, when doing design at the system level.
This might save considerably more power consumption than deletion of a couple
of logic gates after performing a detailed study of the logic gate implementation
of that adder block.


Power evaluation (verification) happens in various forms, including

Estimation (system level and RTL). At higher levels of abstraction, power consumption is estimated. It could not be otherwise: at such high levels, it is difficult to obtain very accurate numbers. Most of the focus
levels, it is difficult to obtain very accurate numbers. Most of the focus
is on obtaining accurate comparisons between multiple options; that
is, relative estimations should be accurate, while absolute estimations
should be accurate only in order of magnitude.

Verification (RTL/logic level). At the RTL/logic level, the design can be verified from the power consumption perspective. Reasonably accurate estimates are possible at this level by leveraging all the technology information embedded in the technology logic library, combined with advanced models to calculate power based on input data vectors. This is referred to as verification, since the result can be verified against the power constraints in the design (e.g., does this block consume less than 0.5 W? Because that is the power constraint for this block!).

Simulation (circuit level). Finally, at the transistor level, which will be explained in more detail in chapter 5, actual circuit simulations can be
run, which are the most accurate and the oldest (>30 years old) type of
simulation done in chip design. Circuit simulation computes voltages and
currents at every node in the circuit. Since power consumption is voltage
times current, power consumption can be readily and most accurately
obtained using circuit simulation.
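That last observation can be shown in a few lines; the node voltages and currents below are invented, standing in for what a circuit simulator would report.

```python
# A circuit simulator reports voltage and current at every node;
# power is voltage times current, so total power is a simple sum.
nodes = [
    ("n1", 1.2, 0.004),  # (node name, volts, amps) -- illustrative
    ("n2", 1.2, 0.010),
    ("n3", 0.9, 0.002),
]
total_w = sum(v * i for _, v, i in nodes)
print(f"total power: {total_w:.4f} W")
```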

Figure 4-13 depicts graphically the key aspects involved in pursuing logic power analysis and verification. As figure 4-13 shows, the first input is, as usual,
the RTL/logic-level description of our design. This is the golden input of the
design at this abstraction level (i.e., the key input specification that we start from,
that is being implemented, and that will not be changed at this point). For the most
accurate results, logic synthesis needs to be performed beforehand. Otherwise,
the design flow generally needs to perform a default synthesis or make strong
assumptions to get an initial estimate, and a detailed synthesis is done later on
(including a careful synthesis, with detailed technology information from each
logic gate and possibly wiring information) for an accurate power estimation.


Fig. 4-13. Power analysis/verification process

A second crucial input comprises the power constraints. Whether they are
implicit or explicit, constraints will guide the process during optimization. At
minimum, they need to be checked when power analysis produces a result. For
example, see the "OK?" box in figure 4-13. The power constraints for a chip can
be readily obtained; constraints for each logic block, however, are more difficult
to obtain. From the overall chip power budget, which block should take more,
and which block should take less? Designers, managers, and their tools need to
support power budgeting in a systematic and effective manner. In this sense, it
is similar to the problem of timing budgeting. Once block-level power constraints
have been obtained, each power analysis run can be checked against them.
The third key input, which is also featured in timing analysis, comprises
input patterns and arrival times for each signal. As discussed earlier, the amount
of power depends on the inputs a circuit has to process. Generally, the more/
faster these inputs change, the more power the logic block is likely to consume, specifically more dynamic power. While latches may be switching continuously
if the clock is never turned off, the rest of the logic will usually switch depending
on the data on which it operates.
The main output of power analysis and verification is an estimation of the power consumed by the block at hand. Note that power is statistical. As a result,
the most common output includes at least three sets of numbers: typical, worst
case, and best case. In other words, average, high, and low values, respectively, are
provided. The explanation is simple. As with timing analysis, the manufacturing
process has imperfections and hard-to-predict variations that have a significant
impact on power. As a result, logic gates could be faster than expected or slower
than expected. When gates are fast, they tend to consume more power, and
vice versa. Faster gates, as can be recalled from timing analysis, represent the "best case" from the timing standpoint, corresponding to the highest power consumption number (11 milliwatts in fig. 4-13). Conversely, the "worst case" actually corresponds to the smallest power consumption number (9 milliwatts).


Example. Figure 4-14 illustrates a case in which a designer needs to fix a power consumption problem at the RTL/logic level.

Fig. 4-14. Design case in which a power analysis problem is fixed

As figure 4-14 indicates, the designer starts, as usual, with the RTL/logic-
level description of the design. Once the designer is content with a specific
version of that description (e.g., timing analysis has been successfully run), the
designer will want to verify that the power consumption constraints are also met.
Alternatively, if those constraints are not available, the designer may want to at
least verify whether the design is power efficient and whether it is possible to
make it more power efficient.
To pursue power verification, as previously explained, the designer runs a
power analysis tool. Running this tool may entail compiling the logic description
(in Verilog code in this example) so that it is mapped to the technology
library; thus, all technology information is made available for a more precise
power estimation.
The result of running the tool is, in this case, an output file that can
be viewed with a text editor, as usual. Sophisticated tools exist that provide
a graphical depiction of the power analysis result, perhaps using colors to
indicate what parts of the block are hotter and which ones are cooler.
More frequently, though, text output that can be understood and analyzed in
detail is provided.
Unfortunately, the average-case power consumption is 14 milliwatts for this
particular logic block. Since the power constraint for average case was set to
12 milliwatts, the constraint is clearly violated. The designer needs to do
something about this issue.
The designer decides to look at the Verilog code again with an editor. He or she does so and finds that a good place to look is the number of buffers. (Recall that a buffer has no logic function; it is put in place to "clean up" and strengthen the signal


as it travels across circuits and wires, to ensure that the block works and, more
important, to improve performance, i.e., make the overall circuit block faster.)
The designer notices that there is a chain of no less than 12 buffers that were
inserted some time ago, when it was thought that the design needed them to
satisfy timing constraints. It seems clear now, after timing analysis, that there
is a lot of slack that can be used in this chain. Thus, it is decided that 8 of those
12 buffers will be eliminated, to address the power consumption issue at hand.
The designer changes the Verilog code to reflect this decision.
The designer then compiles the code again, but the first task that needs
to be tackled is to repeat timing analysis. Clearly, eliminating as many as eight
buffers might have eaten most, if not all, of the slack in the design. The designer
runs timing analysis, and, fortunately, no new violations are returned. The code
is then compiled/synthesized for power analysisassuming that the compilation
for timing analysis was not sufficient. Power analysis is run, and the new power
number for the average case is 8 milliwatts, a very substantial reduction in
power with apparently no performance penalty.
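The buffer-deletion fix can be mimicked with a toy model. Every constant below (per-buffer power, per-buffer speedup, base power, both constraints) is invented to mirror the shape of the example; real figures come from the analyzers.

```python
# Toy model of a block whose critical path is sped up by a buffer chain:
# each buffer shortens the path but burns power. All numbers hypothetical.
BASE_POWER_MW = 5.0          # block power with no buffers
BUFFER_POWER_MW = 0.75       # average power per buffer
UNBUFFERED_DELAY_PS = 1000   # path delay with no buffers
SPEEDUP_PER_BUFFER_PS = 15   # delay removed by each buffer
REQUIRED_PS = 950            # timing constraint
POWER_LIMIT_MW = 12          # average-case power constraint

def block_stats(n_buffers):
    """Return (power_mw, slack_ps) for a given buffer count."""
    power = BASE_POWER_MW + n_buffers * BUFFER_POWER_MW
    delay = UNBUFFERED_DELAY_PS - n_buffers * SPEEDUP_PER_BUFFER_PS
    return power, REQUIRED_PS - delay

print("12 buffers:", block_stats(12))  # over budget, lots of slack
print(" 4 buffers:", block_stats(4))   # under budget, slack still >= 0
```

With these numbers the 12-buffer version burns 14 mW against the 12 mW budget, while removing 8 buffers drops it to 8 mW and eats most, but not all, of the slack, matching the outcome of the walkthrough.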
Question. Why does power analysis need to be executed to check for
power constraints, when the prior logic synthesis is supposed to account for
power constraints?
Unfortunately, although in its implementation, logic synthesis attempts
to account for all constraints provided, the result may not be precisely as
desired. First, synthesis may utilize a rough model to estimate power that
is approximate in nature, especially since it needs to complete its run in a
reasonable time. The detailed power analysis executed later may be much
more precise and provide a different estimate. Second, the amount and
complexity of constraints (timing, power, and area) may make it difficult if
not impossible to satisfy all of them at the same time and by a wide margin.
As a result, the synthesis tool may do its best but cannot guarantee
that those constraints will be met. This is one more reason to pursue detailed
power analysis and optimization.
Question. In light of the give-and-take between timing and power, could it
happen that the designer moves into an endless loop of changes? How can this
issue be addressed?
Yes, this is possible for very difficult, high-end designs (e.g., a high-
performance microprocessor or an ultralow-power SoC). The issue may be
addressed through careful setting of the overall chip and individual block
requirements (which result in the constraints themselves), plus optimization.
Timing and power optimization perform changes on the design to optimize for
a given constraint. In other words, the steps shown in the preceding example
are performed, but possibly automatically. Power and timing optimization tools


are slowly becoming a staple of today's designs, as a postprocessing step that
is executed after logic synthesis. An example of an optimization step is buffer
insertion, in which an automated set of tools takes a synthesized netlist and
optimizes it by smart insertion and sizing of buffers. By addition or enlargement
of buffers, performance is improved at timing-critical sections. Conversely, by
deletion or reduction in size of buffers, power consumption is reduced in non-timing-critical sections of the design. Finally, the other important function of buffers, namely, maintaining signal integrity, or quality, is accounted for as
well during this process.

Impact of manufacturing on power

Logic power consumption is directly affected by manufacturing characteristics
for several important reasons. This statement will become clearer in chapter
5, in which the circuit design and layout design levels are examined. Because
of manufacturing process imperfections and hard-to-predict variations, logic
gates may be faster or slower than expected. When they are faster, they tend to
consume more power, since speed generally comes from higher levels of current
produced by the devices. This is not always the case, since sometimes higher
speed is given by smaller capacitances that need to be charged and discharged.
Therefore, power analysis tends to produce corner cases, just as timing analysis does. The three parameters that can generally be varied resemble those in the
case of timing analysis:

Process. Manufacturing process variations/uncertainties have a very

significant impact on power consumption for several reasons. If we
look at key parameters that determine power consumption, the impact
becomes evident:
Leakage current. Leakage is related to device dimensions and device parameters.
Capacitance. Capacitance is related to device and wire dimensions.
Voltage supply. The voltage distribution network depends on wire and
device dimensions/parameters. (This parameter will be analyzed in
more detail in the next point.)

Voltage. The higher the voltage applied on a digital logic gate is, the
faster this gate will run and the higher will be the power consumed. As
previously described, voltage supply has a level of uncertainty, owing to
sources of variation. In addition to manufacturing-induced variations, the
voltage signal becomes distorted as it travels from its power source into
the chip and through internal wires to each of the logic blocks and to
logic gates.


Temperature. Temperature affects power consumption, not just timing.
Since temperature affects the threshold voltage of transistors, it affects
both dynamic and static power. In general, higher temperatures will
make a device slower, yet possibly make it consume more power, as it
increases its resistance. Therefore, temperature is also the cause of a
corner. As mentioned before, it is generally not possible to predict temperature in the field; thus, chips are typically designed to work within a large temperature range (e.g., 25–100°C).
Considering these three corners, a typical power analysis run can produce
up to six separate results. Each of these needs to be analyzed for constraint violations
(in at least two cases: the worst case and the best case), further complicating the
power analysis and optimization problem.

Signal Integrity
Timing and power consumption, although perhaps the most important
performance-related parameters in the design, nevertheless are not the only
parameters for which to perform verification. Signal integrity is a very important
issue in today's chips, as voltage levels keep declining and addressing noise issues
becomes imperative. The key word here is noise.
Definition. Signal integrity verification is the design task aimed at verifying
and addressing noise issues in electronic circuit blocks.
What is noise? From the perspective of a chip designer, noise is the "dark magic"
that somehow makes voltages around the chip vary dynamically and unexpectedly
as time goes by. The primary signal integrity issues are cross talk and supply voltage
noise, but other key phenomena include electromigration and substrate coupling.

Cross talk
Imagine that a signal A, which should remain "1" until the next clock cycle, suddenly bounces down so much that it practically becomes a "0". Why? For example, a nearby signal B changes from 0 to 1, and as a result, signal A is affected by a phenomenon called coupling or cross talk (figure 4-15). Coupling
happens because two signals that are close to each other, connected by a certain
wire or material, may share electrical charge or current over time and thus
influence each other. Typically, one signal will act as the aggressor and the other
as the victim.


Fig. 4-15. Fundamental principles of cross talk

Coupling in chips is usually capacitivethat is, the connection between

the two devices is effectively a capacitor. Nonconnected devices and wires in
integrated circuits are separated by insulators, such as oxides. Because we are
effectively connecting two conductors by using an insulator, the result is a capacitor.
Capacitors follow a law of charge distribution, by which two capacitors in series
need to share the charge equally. Why? Because when they are connected, they
share a plate in common; thus, charges need to be equal.
Consider that a certain wire or device suddenly becomes a 1 and thus
acquires a voltage of, say, V3 volts, across its terminals. Assume that this wire,
the aggressor, is metal on top of an insulator and has an effective capacitance of
C3. As a result, the charge across its terminals is

Q3 = C3 × V3
Now, imagine that the capacitance between the aggressor and the victim is
C2 and the capacitance of the victim is C1. As a result, charge will get divided up
equally between the two capacitances:

Q1 = Q2, i.e., C1 × V1 = C2 × (V3 - V1)

Therefore, the voltage gets divided in an inversely proportional relationship with
capacitances (i.e., larger capacitances need smaller voltages to store the same
charge, given that Q = CV); thus, the voltage across the victim (assuming that it
starts from no voltage and no charge) would be expressed as

V1 = V3 × C2/(C1 + C2)     (4.20)
In summation, the higher the capacitance between aggressor and victim is, the stronger will be the coupling. If C2 approaches a very large number, then equation (4.20) shows that the voltages are the same, i.e., that there is 100% coupling.
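The capacitive divider of equation (4.20) is easy to evaluate numerically; the capacitance values below are illustrative.

```python
# Cross-talk coupling onto an initially uncharged victim wire:
#   V_victim = V_aggressor * C2 / (C1 + C2)
# where C2 couples aggressor to victim and C1 is the victim's capacitance.
def victim_voltage(v_aggressor, c_coupling, c_victim):
    """Voltage induced on the victim by a swing on the aggressor."""
    return v_aggressor * c_coupling / (c_victim + c_coupling)

# Equal capacitances: the victim sees half of the aggressor's 1.0 V swing.
print(victim_voltage(1.0, 1e-15, 1e-15))   # 0.5

# A dominant coupling capacitance approaches 100% coupling.
print(victim_voltage(1.0, 100e-15, 1e-15))
```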


Supply voltage noise
Time-dependent movements in the supply voltage signal, commonly denoted
Vdd, also produce grave signal integrity problems. If the supply voltage declines
significantly (typically over 10%) for a certain amount of time, several issues arise.
First, the devices powered by that voltage will be much slower than they should
be. As a result, timing constraints may be violated, and the chip will not work
as specified. In fact, if voltage declines very significantly, the logic gates simply
will not perform at all, and no function will be executed correctly. Second, if the
supply voltage increases significantly, the opposite issue arises: gates run faster
than intended, and the resulting hold timing violations can likewise cause the
chip not to work at all.

Electromigration
Conductors in chips deteriorate primarily because of the current going
through them. As current passes through wires, actual material is transported
owing to the movement of the charges (which is precisely what electrical current
is about) in the wire. As the charges (electrons or ions) move through the wire,
they run into the atoms that really make up the wire metal. As charges run into
these atoms, their momentum gets transferred, especially when the density of
this current is high. In other words, metal wire atoms become displaced.
Electromigration can cause actual faults in a circuit. As atoms move, the wire
may become open, or else it may become connected to another wire, which is
referred to as a short. Electromigration becomes a more acute problem when
circuits, devices, and wires become smaller, as chip technologies become more
advanced. Why? Because it then takes fewer displaced atoms for a fault to occur.

Substrate coupling
Electrical signals going through wires and devices can couple with each
other via the silicon substrate. This phenomenon of substrate coupling noise has
characteristics similar to cross talk, but since it happens through the substrate
in every chip, it does not affect digital circuits as strongly as analog circuits. The
silicon substrate is a much longer path than the material between two adjacent
wires; furthermore, digital circuits are robust, since they only have two states,
0 and 1, between which they switch quickly at discrete clock cycles. However,
analog circuits, as shown in chapter 5, have continuous, infinite states or values
and are thus much more sensitive to noise, including substrate noise.
Analog circuits are increasingly included in the same silicon substrate (i.e., on
the same chip) as digital circuits. These digital circuits are switching at increasingly
faster rates and are drawing physically closer to one another and to the analog
circuits. As a result of this closer interaction with fast-switching digital circuits,
analog circuits are increasingly under attack, especially given the additional issue
that analog circuits have to work with increasingly lower voltages. A small bounce
in these voltages has, on a percentage basis, a large impact on their signals. One
can think about the substrate in a chip as an electrical circuit itself: it is made of
silicon, a semiconductor with finite resistance.

Signal integrity analysis, as you may have guessed by now, influences the
signals that determine speed and power consumption in a circuit. For this reason,
signal integrity analysis is most often integrated in modern designs with the
following design tasks that determine performance and power consumption:

Timing analysis. Signal coupling may change the instantaneous voltages
in wires and gates. As a result, it can affect the speed at which signals
flow across wires and gates and, thus, the timing of these signals.
Modern timing-analysis tools estimate coupling between separate
signals, especially once partial or complete layout information has been
generated; thus, it is clear which signals are physically close. Close
signals are likely to couple with each other electrically. Similarly, voltage
supply noise should be considered in timing analysis. For example,
fluctuation in the voltage supply means that at some point in time, certain
gates and wires have to work with lower (higher) voltages; they thus run
slower (faster), which can result in setup (hold) timing violations.

Placement/routing. For similar reasons, the generation of layout
information needs to account for signal integrity. As discussed in chapter
5, chip layouts are generated through placement of blocks/gates and
routing between them with appropriate wires. When placing and routing
blocks, signal integrity should be accounted for, to reduce noise coupling
between sensitive signals. For example, additional space or isolation
can be added between multiple signals in a bus. Chips commonly
interconnect blocks using buses, that is, groups of bits (e.g., 32 bits) that
are routed together across the chip. Buses exist because operations in
a chip are performed on numbers, which can be expressed as a fixed
number of bits. Today's chips most commonly use 32-bit buses internally,
and high-performance processors may use 64 or 128 bits. It is important
that all signals in these wires arrive at similar times at their destination,
since they all belong to a single data unit. This is accomplished by use of
a number of design techniques, including inserting buffers for each wire
in the bus across the chip. Since cross talk can affect synchronized timing
between all bits, it should be minimized by carefully routing these wires
together and in parallel.

Design for Test
Before a manufactured chip is considered to be working, it needs to be tested
as it comes off the manufacturing line. Testing a chip involves feeding a set of test
input patterns, having the chip run using these patterns, and verifying the output
to check that it matches the specified outputs and thus is consistent with the chip
requirements. Chips can fail for two reasons:

Design defects. The design was not correctly completed according
to specifications.

Manufacturing defects. The design, albeit correctly completed, was not
manufactured according to specifications.

Testing cost and testing time have become a huge portion of the overall cost
and time involved in producing a working chip. As a result, it is very important
to account for testing issues before the chip is actually built, that is, when the
chip is designed. This task is hence called design for test (DFT), and it is closely
associated with a test plan, or strategy, to design the chip for optimal testability.
One might assume that testing implies directly checking that a chip performs
its function and does so at the speed and power consumption required. Although
this is essentially accurate, most of the testing effort has conventionally been
allocated to testing the function of the device. If one thinks about the complexity
of the function in any current chip, however, it is evident that the testing challenge
is an insurmountable one.
Example. Consider a chip that provides encoding and decoding of high-
resolution (1,024 × 769) images. How long would it take, and how much would
it cost, to test this functionality?
Clearly, it would be extremely difficult to test 100% of the functionality of a
complex circuit by checking all possible data and states with which it would need
to work. In our example, testing for functionality by checking for all possible
images that would need to be decoded and/or encoded would clearly be difficult.
In addition, if a failure were to occur, it would not necessarily be related to a
design issue. How do we know whether it is a design or a manufacturing issue,
while providing an effective testing method?
The answer is by use of fault models. A well-established theory of testing
shows how defects can be mathematically modeled as specific types of faults, and
these models can be leveraged to develop a clear, feasible, cost-effective test plan.
The most common defect or fault model is the stuck-at fault model. Under this
model, a defect is given by a certain node in the chip (i.e., a point touching one
or more wires or device pins) that gets stuck at 1 or 0 permanently, regardless
of what the circuit is actually doing. The fault model refers to a certain type of
manufacturing defect, but theory has shown it to be a very effective way to provide
test inputs for combinational logic. If a design team can provide tests that check
for every node at which the circuit might get stuck, then the circuit is said to have
100% stuck-at fault coverage, a widely used benchmark in chip testing.
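The notion of stuck-at fault coverage can be illustrated with a toy fault simulator for a single NAND gate. The sketch below is a simplification for illustration only, not a production fault simulator:

```python
from itertools import product

def nand(a, b):
    # Good (fault-free) two-input NAND
    return 1 - (a & b)

# Fault list: every node (input a, input b, output) stuck at 0 or 1
faults = [(node, v) for node in ("a", "b", "out") for v in (0, 1)]

def faulty_nand(a, b, node, value):
    # Same gate with one node forced to a constant (a stuck-at fault)
    if node == "a":
        a = value
    if node == "b":
        b = value
    out = nand(a, b)
    if node == "out":
        out = value
    return out

def coverage(vectors):
    # Fraction of faults whose effect becomes visible at the output
    # for at least one of the given test vectors
    detected = set()
    for a, b in vectors:
        good = nand(a, b)
        for f in faults:
            if faulty_nand(a, b, *f) != good:
                detected.add(f)
    return len(detected) / len(faults)

print(coverage(list(product((0, 1), repeat=2))))  # 1.0: full coverage
print(coverage([(1, 1)]))  # 0.5: one vector detects only half the faults
```

A design team's goal of "100% stuck-at fault coverage" corresponds to finding a vector set for which this fraction reaches 1.0 for every combinational block.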
Other fault models exist. For example, a shorts model tests for unwanted short
circuits between two or more wires, and an opens model tests for unwanted separation between
two nodes that are supposed to be connected. DFT is most often focused on
producing the right logic and test inputs/outputs, to ensure that these faults are
tested for when the chip comes off the manufacturing line.
As with most other design tasks, DFT can be divided into at least two
subtasks: test synthesis and test verification.

Test synthesis
The goal of test synthesis is apparently simple: to make the chip (or circuit)
more testable. What exactly does this mean? Testing the chip is a complex process
that can vary widely, depending on the type of chip and its market application.
There are two key goals of test synthesis that will benefit any specific chip or
circuit block: test logic generation and test input generation.
Test logic generation. In this step, the designer, with the help of DFT
tools, generates extra logic that helps identify and/or diagnose faults. This task
is generally pursued in the early design stages. Key goals are to help feed test
vectors (data inputs) easily into the circuit and to help take out the results for
evaluation once the chip has run on these vectors.
The most common approach to test logic generation is scan-based DFT.
Scan-based DFT converts a certain number of the latches in a design into
scannable latches (defined and discussed earlier). By choosing scannable latches,
a sequential logic design is converted, for testing purposes, into something close
to a combinational circuit. With scannable latches, one can insert any input
vector desired into any of the latches in a very quick and efficient manner (i.e., by
scanning in the input data). Without scannable latches, one would have to insert
some data at the pins of the circuit, which would then propagate naturally, after a
number of clock cycles, to the latch where we want those data to be. The number
of cycles necessary for just that one test case could be thousands or millions,
which drives testing cost and time up significantly, potentially to thousands
of dollars per chip. Therefore, scan logic test generation makes a circuit more
testable. In addition to scan latches, other logic may need to be added to improve
the testability of the circuit.
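A chain of scannable latches can be modeled, very roughly, as a shift register. The following toy model is purely illustrative of the scan-in/scan-out idea, not of how real scan hardware is implemented:

```python
class ScanChain:
    """Toy model of a chain of scannable latches: in scan mode the
    latches form a shift register, so any state can be loaded serially."""

    def __init__(self, length):
        self.latches = [0] * length

    def scan_in(self, bits):
        # One bit enters per scan clock; earlier bits shift deeper
        for b in bits:
            self.latches = [b] + self.latches[:-1]

    def scan_out(self):
        # Shift the captured state out, one bit per scan clock
        out = []
        for _ in range(len(self.latches)):
            out.append(self.latches[-1])
            self.latches = [0] + self.latches[:-1]
        return out

chain = ScanChain(4)
chain.scan_in([1, 0, 1, 1])  # serially load a test vector in 4 clocks
print(chain.scan_out())      # unloads in the same order: [1, 0, 1, 1]
```

The point of the model is the cost: loading an arbitrary state into N latches takes only N scan clocks, rather than the thousands or millions of functional cycles mentioned above.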

While complicated, generating scan logic is a well-structured problem.
Therefore, it is generally automated by tools called
DFT compilers. Scan design can be partial or total. Partial scan refers to the case
when only a portion of the latches become scannable latches, chosen using a
method that allows the rest of the logic to be tested the hard way, in a reasonable
amount of time; you can guess what total scan refers to. The best-known type of
clocking for scan testing is level-sensitive scan design (LSSD), initially developed
by IBM. In LSSD scan, different clock input phases are used when the latch is in
normal mode, as opposed to scan mode.
Built-in self-test (BIST) comprises a set of well-known design techniques that
allow circuits to test themselves, by generating test inputs, running them through
the circuit, and checking the outputs against the specification. BIST is much
more developed in memory circuits, given the simplicity of the function that they
implement (i.e., store a value of 0 or 1 in each bit, when it is written; read the
value out when required; keep the value inside otherwise). Thus, memory is a
whole different animal when it comes to testing.
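Memory BIST engines typically run March-style test algorithms. The sketch below shows the idea with a hypothetical toy memory model; real BIST hardware implements such sequences in dedicated on-chip logic, not software:

```python
class Memory:
    """Toy memory; optionally one cell is stuck at 1 (a defect)."""

    def __init__(self, size, stuck_at_1=None):
        self.cells = [0] * size
        self.stuck = stuck_at_1

    def write(self, addr, value):
        if addr != self.stuck:
            self.cells[addr] = value

    def read(self, addr):
        return 1 if addr == self.stuck else self.cells[addr]

def march_bist(mem, size):
    # Minimal March-style sequence:
    # write 0 up; read 0 / write 1 up; read 1 / write 0 down; read 0 up
    for a in range(size):
        mem.write(a, 0)
    for a in range(size):
        if mem.read(a) != 0:
            return False
        mem.write(a, 1)
    for a in reversed(range(size)):
        if mem.read(a) != 1:
            return False
        mem.write(a, 0)
    for a in range(size):
        if mem.read(a) != 0:
            return False
    return True

print(march_bist(Memory(8), 8))                # True: good memory passes
print(march_bist(Memory(8, stuck_at_1=3), 8))  # False: defect detected
```

Because the function of a memory is so regular, such a simple marching pattern exercises every cell in both states, which is why BIST is far more mature for memories than for random logic.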
Test input generation. In this step, the designer, with the help of DFT
tools, creates data inputs that can be applied by the tester machine during the
manufacturing test. (This machine is also referred to as a tool, but that term
is avoided here to eliminate confusion with design tools.) Test input generation
produces a set of input vectors and outputs to check against for each of the
circuits in the chip. Two assumptions are generally used:

Scan testing is applied

A well-structured model (e.g., stuck-at) is used

As a result, DFT tools that generate these test data are called automated
test pattern generation (ATPG) tools. ATPG is a well-established field of work,
and ATPG tools are automated and integrated with other logic synthesis and
verification tools.
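The core idea of ATPG, finding an input vector for which the faulty circuit's output differs from the good circuit's output, can be shown by brute force on a tiny hypothetical circuit; real ATPG tools use far more scalable algorithms:

```python
from itertools import product

def circuit(a, b, c, fault=None):
    # Hypothetical circuit y = (a AND b) OR c, with internal node n,
    # optionally injecting one stuck-at fault (node_name, stuck_value)
    if fault and fault[0] == "a": a = fault[1]
    if fault and fault[0] == "b": b = fault[1]
    if fault and fault[0] == "c": c = fault[1]
    n = a & b
    if fault and fault[0] == "n": n = fault[1]
    y = n | c
    if fault and fault[0] == "y": y = fault[1]
    return y

def atpg(fault):
    # Brute-force ATPG: search for an input vector whose faulty
    # output differs from the good output
    for vec in product((0, 1), repeat=3):
        if circuit(*vec) != circuit(*vec, fault=fault):
            return vec
    return None  # no such vector exists: the fault is untestable

print(atpg(("n", 0)))  # (1, 1, 0): good y = 1, faulty y = 0
```

The exhaustive search works only because the circuit has three inputs; the contribution of real ATPG algorithms is doing this search efficiently for millions of gates.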
Figure 4–16 depicts a simplified description of test synthesis. In this
description, the designer uses a test synthesis tool that takes two types of inputs:

The logic itself

Input parameters and a library to control the synthesis

In figure 4–16, the key parameter is the targeted test coverage.

Fig. 4–16. Test synthesis design step

Test verification
As in other design tasks, for every synthesis step (i.e., generate something),
there is at least one verification step (i.e., verify that what was generated actually
meets specifications). In testing, there is a verification step, in addition to the
other functional, timing, power, and signal integrity analysis steps. In this case,
the designer verifies that the circuit is testable, according to a specification. The
word testable may have different meanings, depending on the specification. A
common specification is to be near 100% testable, that is, for each combinational
circuit to have near 100% stuck-at fault coverage. Such a design task is referred
to as testability analysis.
Definition. Testability analysis comprises the set of design steps focused on
analyzing the ease of manufacturing tests for a given logic design.
Testability analysis, although a verification task, may be performed before
test synthesis, even in early design stages; that is, it may be used to help with test
synthesis. A design with high testability will make the test synthesis task easier
and will result in a test synthesis output that is more effective. Two key concepts
are used when evaluating testability:
are used when evaluating testability:

Controllability. Testing for a certain fault requires reaching that fault and
trying to set it to its correct value. For example, if we are testing for a
certain wire to be stuck at 1, we need to be able to insert a test vector,
possibly using the scan test infrastructure, that attempts to position a 1 at
that node. Measuring testability requires measuring the ease with which
this is possible.

Observability. Testing for a certain fault also requires being able
to observe whether the node was actually set to its correct value.
For example, if we are testing for the same wire to be stuck at 1,
we need to be able to observe whether, after inserting the test
vector using the scan test infrastructure, there actually is a 1 at
that node; otherwise, there is an actual fault. Measuring testability
requires measuring the ease with which this node's value can
be observed.

Before test synthesis, early in the design, testability analysis provides
crucial feedback to designers regarding the observability and controllability
of the logic nodes inside each of the circuits in a chip. After test
synthesis, testability analysis concentrates on providing accurate
measures of fault coverage for each type of fault and for each circuit block in
the chip.

Pre-PD Checking
There is (at least) one more task that needs to be completed before commencing
the conversion of those many logic gates into a complete, interconnected layout
of polygons. Design teams need to ensure that certain additional constraints
are met by the circuit under design. While these constraints are not part of the
original specification, and thus are not compulsory, satisfying them facilitates
the subsequent lower-level design stages. These constraints might better be called
guidelines, except that they are actually enforced by companies; anything that
reduces the risk and effort involved in completing a design is always welcome.
Pre-PD checks are performed to verify that these constraints are met and to
provide feedback to designers as to how to satisfy them better.
Pre-PD checks can vary substantially depending on several aspects of the
design project:

Company. Different companies often have different design methodologies,
and pre-PD checks are part of that design methodology. In fact, this is a design
task that may be less standardized among companies, as compared with the tasks
described earlier in this chapter. These checks may also be less fully specified by design
automation providers. Therefore, these checks represent a potential source of
competitive advantage and thus vary widely among different companies.
Example. Consider a company that has its own internal manufacturing
technology and associated logic library. In this case, the technology library may
have a custom-made naming convention for the library gates and for the pins in
those gates, since the company is the only customer for that technology library.
A separate company that utilizes an external manufacturing technology and
technology library may require more detailed checks in the logic design for issues
with names.
Technology. Different manufacturing technologies may have different
detailed issues that need to be addressed and thus may require different sets
of pre-PD checks to be executed. As technologies become more complex and
advanced, it is common for these checks and guidelines to become more complex.
For example, older technologies do not apply certain power-saving logic design
techniques (because power consumption has become a much harder problem
recently, with advanced technologies). As a result, certain checks happen only on
certain technologies, for example, at 90-nanometer technologies and beyond.
Example. Consider a design that is built into a 90-nanometer CMOS
technology. In this technology, the chip may have two voltage supplies for the
logic blocks: a 1.2-volt supply and a 1.0-volt supply. Speed-critical blocks use the
higher voltage, and the rest of the blocks use the lower voltage. A possible pre-PD
check consists of checking that if two blocks of different voltages are connected
by one or more wires, the connection is made appropriately as the signal needs
to cross a supply voltage boundary. If the signal goes from a lower-supply block
to a higher-supply block, a special-purpose buffer or gate, called a level shifter,
needs to be added in the middle of the connection. If the level shifter is not there,
the pre-PD check will notify the designer and potentially suggest a fix.
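A level-shifter check of this kind can be sketched as a simple netlist walk. The block names, supply values, and data structures below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical design data: each block's supply voltage (volts), and
# each net as (driver block, receiver block, has_level_shifter)
blocks = {"alu": 1.0, "cache_ctl": 1.2, "io_if": 1.2}
nets = [
    ("alu", "cache_ctl", False),    # crosses 1.0 V -> 1.2 V, no shifter
    ("cache_ctl", "io_if", False),  # same supply domain: fine
]

def check_level_shifters(blocks, nets):
    # Flag every net that goes from a lower-supply driver to a
    # higher-supply receiver without a level shifter in between
    violations = []
    for driver, receiver, has_shifter in nets:
        if blocks[driver] < blocks[receiver] and not has_shifter:
            violations.append((driver, receiver))
    return violations

print(check_level_shifters(blocks, nets))  # [('alu', 'cache_ctl')]
```

A real pre-PD checker would read this information from the design database rather than from literals, but the rule it enforces is exactly this comparison.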
Chip. The actual chip can make a difference! Different chips have different
types of logic and overall architecture, making them more or less sensitive to
issues that require pre-PD checks. Therefore, there may be checks that need to
be addressed only for certain chips and not for others.
Example. A chip that includes analog blocks may need to pass certain
checks that purely digital chips do not need to pass. Consider the issue of cross
talk, which was addressed earlier in this chapter (see Signal Integrity). Despite
passing signal integrity analysis on both the digital and the analog sides of the
chip, a standard buffer may need to be added to optimize the conversion of
the signal as it moves from the digital to the analog side, or vice versa. Moreover,
this buffer needs to have a certain name for itself and its pins, so that the layout
system that will act subsequently understands that it needs to lay it out close to
the analog block, for best results.

Admittedly, some of these tasks could be automated, so that these checks
would not have to be executed. However, as technologies become more complex
and designers require more flexibility to be able to optimize their circuit blocks,
a number of new pre-PD checks need to be incorporated, at least until a way to
automate the related fixes is found.

Impact of Manufacturing on Logic Design
Manufacturing characteristics have a direct impact on the various parameters
that are examined during logic design, including timing, power, and area. At the
RTL/logic level, these parameters are estimated with a much higher accuracy
than at the system level of abstraction, based on models that are obtained from
manufacturing abstractions.
Figure 4–17 depicts a summary of the influence of manufacturing on RTL/
logic synthesis tasks. As figure 4–17 shows, practically every part of logic design
is affected, increasingly so, by manufacturing characteristics.

Fig. 4–17. Influence of manufacturing characteristics on logic design

During logic design entry, the types of logic blocks and/or gates that can
be entered are commonly restricted. For behavioral code, design entry does not
depend as much on the technology library, since the designer describes behavior
and the logic synthesis tool will take care of picking gates and larger blocks
from the library as appropriate. However, for the entry of structural code, the
designer may be picking directly from the library-specific logic block or gate,
sometimes even picking the size of such a block or gate. For each manufacturing
technology, the available gates and blocks in that library will be limited to what
the technology can produce effectively, customized for best manufacturing, and
modeled (for timing and power purposes) directly on the basis of models from the
manufacturing technology.
During logic synthesis, a largely automated tool utilizes gates and blocks
from the technology library. Following the conventional two logic synthesis steps,
the first logic optimization phase will likely account in its algorithm for the use
of appropriate optimizations that are known to be effective for the particular
manufacturing technology used; the second logic optimization phase, technology
mapping, is entirely focused, as its name indicates, on mapping to the technology
library available and thus will be directly based on what the technology can provide.
Since logic synthesis is supposed to account for timing and power issues (i.e., to
try to satisfy the constraints that the user enters as parameters for the synthesis),
it is the first step in logic design that directly accounts for manufacturing issues
and plugs them into key design requirements, such as timing/speed and power
consumption. However, timing analysis and power analysis come after this step,
and the main goal of logic synthesis is to implement the functionality of the
design, which is not directly dependent on manufacturing characteristics.
During logic simulation, the exclusive goal is to simulate functionality, to find
bugs or functional errors in the code. As a result, logic simulation is not directly
affected by manufacturing characteristics.
During logic timing analysis, the timing-analysis engine pulls modeling
information for each of the gates in the technology library and utilizes library
models to estimate the timing related to the wires connecting those gates. This
modeling information sits within the technology library and is obtained on
the basis of electrical device models and electrical wire models, which in turn
are directly obtained from manufacturing information (actual or predicted). In
addition, manufacturing processes and their parameters are becoming ever
harder to control, owing to the difficulty of producing ever-smaller features
on silicon wafers. As a result, these electrical models increasingly need to be
statistical; that is, instead of providing a single value for each data point (e.g.,
delay), they provide a statistical distribution of values (e.g., a mean delay and a
standard deviation of delay [well-known statistical parameters], assuming that the
distribution is normal-like). Each gate has a statistical model; each timing value
across the circuit is statistical; and the timing constraints will be violated with a
certain probability, as opposed to with certainty. A typical output of statistical
timing analysis (STA) is a curve that shows a slack distribution (percent probability
that slack is 0, percent probability that slack is 10 nanoseconds, etc.). Statistical
parameters, such as the delay across a gate, primarily relate to manufacturing
process variables, which result in changes in the device model parameters, and
their associated voltages (e.g., threshold voltages for transistors).
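A Monte Carlo sketch illustrates the statistical view: sample each gate delay from a distribution and estimate the probability that the path meets its timing constraint. The nominal delays and the 5% sigma below are hypothetical values, used only for illustration:

```python
import random

random.seed(0)  # reproducible runs

def sample_path_delay(gate_means, sigma_ratio=0.05):
    # One Monte Carlo sample: each gate delay drawn from a normal
    # distribution around its nominal value (delays in ns)
    return sum(random.gauss(m, m * sigma_ratio) for m in gate_means)

def timing_yield(gate_means, clock_ns, trials=10000):
    # Estimated probability that the path fits in the clock cycle,
    # i.e., that slack = clock - delay is >= 0
    met = sum(sample_path_delay(gate_means) <= clock_ns
              for _ in range(trials))
    return met / trials

# Five gates of 0.2 ns nominal delay against a 1.05 ns clock:
print(timing_yield([0.2] * 5, clock_ns=1.05))  # close to, but below, 1.0
```

Instead of a single pass/fail slack number, the designer obtains a probability of meeting timing, which is exactly the kind of slack distribution that statistical timing analysis reports.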

During logic power analysis, the analysis engine pulls power modeling
information for each of the gates in the technology library and utilizes library
models to estimate the power consumed by the wires connecting those gates
(some device, typically a set of buffers, has to drive these wires). Again, this
modeling information sits within the technology library and is obtained on the
basis of electrical device models and electrical wire models, which in turn are
directly obtained from manufacturing information (actual or predicted). Further,
since manufacturing processes and their parameters are becoming ever harder
to control, these electrical models also increasingly need to be statistical (i.e.,
providing a statistical distribution of values for power consumption).
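For switching power, much of this library modeling reduces to the classic relation P = alpha * C * Vdd^2 * f, where alpha is the switching activity. A minimal sketch, with hypothetical values:

```python
def dynamic_power(c_load, vdd, freq, activity):
    # Switching power of one gate: P = alpha * C * Vdd^2 * f
    return activity * c_load * vdd ** 2 * freq

# 10 fF load, 1.0 V supply, 2 GHz clock, 20% switching activity:
p = dynamic_power(10e-15, 1.0, 2e9, 0.2)
print(f"{p * 1e6:.1f} microwatts")  # 4.0 microwatts for this one gate
```

The capacitance and voltage terms in this formula are precisely the quantities that come out of the manufacturing-derived electrical models, which is why power estimates inherit the statistical spread of the process.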
During signal integrity analysis, the analysis engine again pulls electrical
modeling information for each of the gates in the technology library and utilizes
models to estimate the distortion of the signal across the wires connecting those
gates. Recall as well that signal integrity analysis is often integrated as a black box
with timing analysis and sometimes with power analysis. Regardless, modeling
information again typically sits within the technology library and is obtained on
the basis of electrical device models and electrical wire models, which in turn are
directly obtained from manufacturing information (actual or predicted). These
electrical models also increasingly need to be statistical. Signal integrity analysis
may also be performed at lower levels of abstraction, which will be explored in
chapter 5. Such analysis is even more dependent on manufacturing characteristics
and variations thereof, and the results may be fed back into the logic design
subflow to make signal integrity analysis more accurate.
During DFT (synthesis and verification), testability is analyzed, new logic is
inserted, and test vectors (inputs) are generated. Consequently, this phase has a
considerable impact on testability, diagnosability, and the ability to learn how
to improve yield (i.e., percentage of chips that come off the manufacturing line
working correctly). Thus, there is a feedforward influence of DFT on subsequent
manufacturing phases; however, there is also a feedback relationship. Fault
models (e.g., stuck-at) largely determine how DFT tools work and the output that
they produce. These fault models can vary and become more complex depending
on the manufacturing process, how advanced this process is, and what kinds of
issues and defects are likely to occur, especially in reference to the design at hand.
As a result, manufacturing exerts a tremendous influence on how DFT works and
its output.

5 Circuit and Layout Design
This chapter explains the technical steps, in designing a modern chip,
that follow RTL/logic-level design.

Transistor (Circuit) and Layout Design: Back to Basics
Once a specific chip or the digital portion of a chip has been generated up
to the RTL/logic abstraction level, the next step is to reach the level at which
polygons are drawn on a layout. Because modern chips, or even blocks, have
hundreds of thousands to many millions of gates, this process needs to be greatly
automated. Indeed, as shown in chapter 4, logic synthesis provides a detailed
list of these gates, each mapped to a library cell from the technology library,
including the size and function of each of them and how they are interconnected.
To generate the overall layout of the block, the layout of each of these cells is
picked from the library and is assembled using a routing process.
Question. How does a designer know whether this combination of
interconnected layout cells really meets the performance, power, and area
requirements?

Indeed, the answer, as usual, is through modeling. As explained in chapter 4,
logic synthesis and the various forms of verification make use of the library
and the models for each of the components or cells in the library. (A model for
interconnection is also used, but is not discussed here for convenience.)


The logic cells in a library need to be able to perform a function (e.g., NAND),
with a certain performance (e.g., input-to-output speed of 0.1 nanoseconds),
power consumption (e.g., 0.1 watts), and a certain size (e.g., area of 100 μm²).
Creating, verifying, and modeling these cells is a core activity of the transistor
and layout-level design for digital circuit blocks:
Definition. Digital logic cell design consists of generating and verifying the
circuit and the corresponding layout of a logic cell in the technology library used
for a logic block. Figure 51 depicts the process by which digital logic cells are
created and verified.

Fig. 5–1. Simplified logic cell design process

Before a layout is generated, a circuit (composed of transistors and wires)
needs to be generated. As figure 5–1 shows, cell design starts by doing exactly that:
creating a circuit schematic during design entry. A device library is used, made
of transistors of various types and sizes. These transistors are interconnected to
form the desired logic function (e.g., NAND). Inputs, outputs, and power supply
and ground are added to complete the circuit.
Next, the circuit is verified, typically through circuit simulation. The goal is
to verify the correct functionality, speed, and power of the circuit. These goals
are accomplished using device modelsthat is, models for the transistors used,
plus approximate models for the interconnections or wires. This step is called
prelayout verification, as the models cannot yet account for layout details, since
the layout is not complete.
Subsequently, a layout is created that faithfully represents the functionality
and the electrical characteristics of the circuit. Layout checking is pursued, to
ensure that the geometrical design rules and guidelines are met.

This layout is then modeled via a process called layout extraction. In this
process, an electrical model is created for the layout by mapping each subset of the
layout polygons to an electrical elementfor example, a resistor or a capacitor.
A characterization process is then run, which creates a set of approximate
functions that fully characterize the electrical behavior of the cell in a way that
can be used by upper-level tools, such as logic synthesis and verification tools.
To do so, the typical characterization process consists of running a number of
simulations of various types and, as a result, creates an approximate function, or
curve, for each of the key parameters of interest. As an example function, for an
input voltage that changes from 0 to 1 with a given slope S, what is the input-to-
output delay D?
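A characterized cell is often represented as a lookup table that tools interpolate. The table values below are hypothetical; a real library would also index delay by output load, among other variables:

```python
# Hypothetical characterization table for one cell: input slew (ns)
# versus input-to-output delay (ns), as produced by circuit simulation
slews  = [0.01, 0.05, 0.10, 0.20]
delays = [0.08, 0.10, 0.13, 0.19]

def cell_delay(slew):
    # Linear interpolation in the characterized table, the way a
    # timing tool looks up a cell delay for a given input slope
    if slew <= slews[0]:
        return delays[0]
    if slew >= slews[-1]:
        return delays[-1]
    for i in range(1, len(slews)):
        if slew <= slews[i]:
            t = (slew - slews[i - 1]) / (slews[i] - slews[i - 1])
            return delays[i - 1] + t * (delays[i] - delays[i - 1])

print(round(cell_delay(0.075), 3))  # 0.115, halfway between 0.10 and 0.13
```

Running a handful of simulations per table entry and then interpolating at analysis time is what lets logic-level tools estimate timing for millions of gates without resimulating each one.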
As can be seen in the loops of figure 5–1, if any of these steps produces a result
that will not meet the cell's specifications (there must always be specifications!),
then either the circuit or the layout needs to be redesigned to attempt to correct
the problem.

Circuit Design
Design entry
The entry of a circuit design is usually done with a schematic entry tool.
A schematic is a graphical description of a circuit, including symbols for each
of its devices (transistors for digital circuits, as well as other devices for analog
circuits) and for the wires interconnecting these devices (typically described as
simple lines).
Entering a schematic facilitates a number of critical tasks that are undertaken
subsequently, including the following:

Describing and documenting a specific circuit

Getting the circuit ready for simulation, since a complete schematic can
then be used as input to generate a netlist

Performing layout extraction once the layout is done. The layout
determines the final topology of devices on the wafer, the actual device
sizes, the decomposition of the devices' shapes into equal portions called
fingers, and the orientation of these devices (usually on a Manhattan
basis: north, south, west, or east). In layout extraction, the schematic is
labeled with this information in terms of an electrical model of the layout.


Ensuring that the schematic matches the layout, through layout
verification. For obvious reasons, verifying that the layout actually
implements the schematic requires having a schematic!

Characterizing the circuit by use of a number of simulations, so that
logic design tools can use this extraction for various already-explained
purposes (e.g., timing and power analysis; see chap. 4).

Basics of circuit design

The fundamental device used in circuit design is the transistor. Even though a
basic knowledge of circuit design is assumed throughout this book, a brief review
of the basics is appropriate here, for practical purposes.
As figure 5-2 depicts, a CMOS transistor is a four-terminal device, including
a drain, a source, a gate, and a bulk terminal. For an nMOS transistor, drain and
source are both made of doped silicon, that is, crystalline silicon (of group IV in
the periodic table) with a number of impurities (composed of elements of group
V in the periodic table) that make these regions net negative (n regions; hence,
the transistor is called nMOS).

Fig. 5-2. Basic schematic and electrical characteristics of a CMOS transistor

The gate of the transistor is made of a metal connection (traditionally,
elements such as aluminum or copper, but lately more sophisticated compounds),
sitting atop a thin oxide layer. The bulk terminal is basically the region in the
silicon wafer where the nMOS device sits. It is generally lightly doped. The
source and drain regions are generated by doping right on the bulk. Because of
these device regions, there is a potential capacitor structure between the gate
and the bulk.
To understand, in a simplified manner, the behavior of the transistor, consider
the case when the source and the bulk are both connected to ground (i.e.,
0 volts). In this case, when a sizable voltage is applied to the gate of the transistor,
this voltage should appear between the gate and both the source and the bulk,
and a channel eventually forms between the drain and the source. As the voltage
grows, a thin layer between the drain and the source first becomes depleted of
positive (p) elements, eventually becomes a thin n layer (essentially an excess of
electrons) and thus connects drain and source, thereby becoming an almost zero-
resistance path to electrical current.
Mathematically, this physical/electrical process can be summarized using a
basic transistor equation:

I = K (VGS - VT)^2  (5.1)

where I is the drain-to-source current, VGS is the gate-to-source voltage, and
VT is the threshold voltage. In equation (5.1), K is a variable that depends on the
size of the transistor and fundamental device parameters such as the mobility
of electrons:

K = (μ Cox / 2) (W / L)  (5.2)

where W is the width of the transistor (orthogonal to the figure), L is the transistor
length (the line between the drain and source regions), μ is the electron
mobility (i.e., how mobile the electrons are that form the current in the channel),
and Cox is the capacitance of the thin gate oxide per unit area.
What do equations (5.1) and (5.2) really mean? The meaning is fourfold:

As the voltage between gate and source increases, the current starts
growing very quickly. Clearly, this voltage really turns on the transistor
device and makes it a short (i.e., an almost zero-resistance device). This is
truly the electrical effect that forms the basis of digital logic: a high voltage
(a 1) makes the drain-source connection a short (a 0), and vice versa.

The threshold voltage determines the point at which the device gets
turned on and how fast. Because it determines the current per equation
(5.1), it also determines how much power is consumed.

Since the drain-source connection becomes a short, very little voltage
needs to pass between the two for electrical current to be generated.
Hence, the simplification in equation (5.1).

Current needs to go through the channel, much like water through a
pipe. Hence, the wider the transistor is (higher W), the more current can
be generated; further, the shorter the transistor is (lower L), the more
current can be generated.
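
A rough numeric reading of equations (5.1) and (5.2) can be sketched as follows. The device values, and the 1/2 factor used in K, are illustrative assumptions rather than parameters of any real technology:

```python
# A minimal numeric reading of equations (5.1) and (5.2), with assumed
# device values; K = 0.5 * mu_cox * (W / L) is one common form of (5.2).

def drain_current(vgs, vt=0.4, mu_cox=200e-6, w=1e-6, l=0.1e-6):
    """Current I = K * (Vgs - Vt)^2 per equation (5.1); zero below threshold."""
    if vgs <= vt:
        return 0.0                      # device is off
    k = 0.5 * mu_cox * (w / l)          # equation (5.2), illustrative form
    return k * (vgs - vt) ** 2          # equation (5.1)

# Wider device (higher W) -> proportionally more current, as stated above.
i_narrow = drain_current(1.0, w=1e-6)
i_wide = drain_current(1.0, w=2e-6)
print(i_wide / i_narrow)
```

Doubling W doubles K, and therefore doubles the current, which is exactly the water-through-a-wider-pipe intuition from the last bullet.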

As a graphical connection to equations (5.1) and (5.2), we can revisit figure
5-2. On the right side of figure 5-2, the conventional set of transistor curves
is shown for an nMOS transistor. As a transistor is turned on by growing gate-
source voltage, we can visualize the current moving from the bottom-right corner
of figure 5-2 toward the top-left corner (i.e., rapidly growing current with a small
drain-source voltage).
So how are logic gates and latches formed? They are indeed formed by
interconnecting transistors and powering them using a supply. The most basic
digital circuit is the inverter. Figure 5-3 depicts the schematic for an inverter
circuit and its basic input-output voltage curve.

Fig. 5-3. Simplified digital inverter

As figure 5-3 shows, an inverter is made of the interconnection of an nMOS
transistor (N1) and a pMOS transistor (P1). These transistors are symmetric mirrors
of each other. The nMOS device is grounded at its source, and the pMOS device is
powered at its source (VDD). Their gates are connected and form the input of the inverter
(VIN). Their drains are also connected and form the output of the inverter (VOUT).
Now look at the right side of figure 5-3. As the input voltage rises, the nMOS
transistor N1 stays off while the input voltage is lower than the threshold
voltage; meanwhile, the pMOS transistor, P1, is fully on. Once the threshold voltage
is crossed, both transistors are on. While N1 tries to pull the output voltage down,
P1 tries to pull it up. Under the assumption that both transistors are completely
symmetric (i.e., same size and parameters), when half of the supply voltage is
achieved, their pull is identical, and the output is also half the supply. Once the
input voltage gets closer to the supply VDD, N1 is now the transistor that is more on, and
thus the output voltage starts pulling toward ground (0). Once the input voltage reaches
VDD - VT (i.e., the supply minus the threshold voltage), the pMOS transistor P1
starts to be turned off, and the output voltage quickly decreases toward ground.
Because transistor current is heavily dependent on gate-source voltage, the
slope of the curve in figure 5-3 is very high, almost vertical. As a result, the
inverter looks digital, because as soon as the input is close to VDD (1), the output
will very quickly become ground (0), and vice versa.
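
The switching behavior just described can be caricatured with a threshold-only model. The supply and threshold values below are assumptions, and a real inverter has a smooth transfer curve rather than this piecewise one:

```python
# Toy static-CMOS inverter matching the description above: each transistor
# is treated as simply "on" or "off" relative to the threshold VT.

VDD, VT = 1.0, 0.4   # assumed supply and threshold, in volts

def inverter_out(vin):
    nmos_on = vin > VT              # N1 conducts when its gate is high enough
    pmos_on = vin < VDD - VT        # P1 conducts when its gate is low enough
    if nmos_on and not pmos_on:
        return 0.0                  # output pulled to ground
    if pmos_on and not nmos_on:
        return VDD                  # output pulled to the supply
    return VDD / 2                  # both partially on: near the crossover

print(inverter_out(0.0), inverter_out(1.0))
```

A low input gives a high output and vice versa, which is the inversion the curve in figure 5-3 expresses with its near-vertical slope.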


Once we know how to build an inverter, we can use similar principles to build
other circuits, such as NAND and NOR gates. How is this done? The keys to
building these gates are as follows:

Adding two symmetric transistors per input (nMOS and pMOS)

Combining these transistors so that, when turned on or off, they produce
the desired output voltages
Following these principles, figure 5-4 depicts two basic schematics, for a
NAND and a NOR gate, respectively. Let us examine each schematic to ensure
that the circuits depicted provide the desired functionality. On the left side of
figure 5-4, a NAND gate circuit is depicted. When inputs A and B are both 1,
transistors N1 and N2 are on, while transistors P1 and P2 are off. As a result, the
output voltage VOUT is grounded (0). In any other case (if any of the inputs is 0),
the output will be the supply voltage (1).

Fig. 5-4. Basic NAND and NOR CMOS circuits

Similarly, on the right side of figure 5-4, a NOR gate circuit is depicted.
When either input A or input B is on, either transistor N1 or transistor N2 is on,
while at least one of transistors P1 or P2 will be off. As a result, the output voltage
VOUT is grounded to 0, because one nMOS transistor being on will be enough
to pull the output down and because at least one of the pMOS transistors is off,
thereby cutting off the path to the supply on top of the schematic. Only when
both inputs are 0 will the output be the supply voltage (1). Why? Because both
pMOS transistors will then be on; thus, a short path between the output and the
upper supply voltage exists.
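
The series/parallel reasoning above reduces to a small sketch. The pull-down conditions below mirror the transistor stacks of figure 5-4, not any real netlist:

```python
# The series/parallel pull-network rule behind figure 5-4:
# NAND: nMOS in series (both inputs must be 1 to pull down), pMOS in parallel.
# NOR:  nMOS in parallel (either input at 1 pulls down), pMOS in series.

def nand(a, b):
    pull_down = a and b           # series nMOS stack conducts only if A and B
    return 0 if pull_down else 1  # otherwise a parallel pMOS pulls up to VDD

def nor(a, b):
    pull_down = a or b            # parallel nMOS: one high input is enough
    return 0 if pull_down else 1  # series pMOS stack pulls up only when A=B=0

print(nand(1, 1), nor(0, 0))
```

Note the duality: the nMOS network computes when the output must be 0, and the pMOS network is its complement, so exactly one network conducts for any input combination.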
Creating latches is a little more complex. Still, the same basic principles
apply. Figure 5-5 depicts an edge-triggered latch.


Fig. 5-5. Basic edge-triggered latch

As the schematic in figure 5-5 shows, there are two key inputs, a clock input
clk and a data input VIN, plus an output VOUT. The key to understanding the latch's
behavior lies in the clk input. Note the series of inverters at the input of the latch.
These are intended to form a delay between that clock input and the other clock
inputs in the latch. When the clock input rises from 0 to 1, it will immediately
turn on the other clock input transistors, except the transistor at the end of the
inverter chain. This transistor will be off until the delay has elapsed. Until that
time, exactly during the delay time, the left vertical chain of nMOS transistors in
stage 1 will be on, and VIN will be transmitted inverted to stage 2. During this
delay time, the clock input and the bottom transistor in stage 2 will both be on;
thus, transmission will occur all the way to the output.
After the delay has elapsed, the output of the inverter chain is low. As a result,
stage 2 is isolated. Why? Because stage 2's input is set to 1. The top transistor and
the bottom transistor are both off. Thus, the output value is kept (typically, through
an additional couple of inverters connected in a loop, plus perhaps a buffer).
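
The capture-then-isolate behavior can be sketched at the behavioral level. This models only the input/output behavior described above, not the transistor stages or the inverter-chain delay:

```python
# Behavioral sketch of the edge-triggered latch of figure 5-5: the data
# input is sampled only on the rising edge of clk and held otherwise.

class EdgeTriggeredLatch:
    def __init__(self):
        self.prev_clk = 0
        self.q = 0                            # stored output value

    def tick(self, clk, d):
        if clk == 1 and self.prev_clk == 0:   # rising edge: capture window
            self.q = d                        # data passes through both stages
        self.prev_clk = clk                   # afterwards the stages isolate
        return self.q

latch = EdgeTriggeredLatch()
latch.tick(0, 1)         # clock low: nothing captured
out1 = latch.tick(1, 1)  # rising edge: capture a 1
out2 = latch.tick(1, 0)  # clock still high: input change is ignored
print(out1, out2)
```

The short capture window created by the inverter-chain delay corresponds here to the single rising-edge check; once it passes, the stored value is immune to input changes until the next edge.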

Circuit design styles

The style of a digital circuit at the transistor level of abstraction is the type of
circuit template that is used for such a circuit. When designing an entire chip, one
to very few circuit design styles are used. A circuit style implies a set of choices
made when creating these circuits and is fixed for a given cell circuit library.
Common choices that determine style include the following (default indicates
the most common choices, exemplified in the type of circuits described in the
previous section):

Static (default) versus dynamic

Single (default) versus multiple versus variable threshold
Single ended (default) versus differential
Single (default) versus multiple versus variable supply


Static circuits have one common characteristic: all input and output signals
are connected at all times to a supply voltage (i.e., either VDD or ground). In
dynamic circuits, a signal may stay at a given value without being connected
to a supply or ground. A given value (0 or 1) may stay on a node owing to the
charge stored on it (thanks to the associated capacitor). Then, by the next clock
cycle, it will be charged or discharged only as necessary, thereby saving time
and power consumption. Dynamic circuits also usually have fewer transistors,
because there does not need to be a path between every node and the power
supply and ground. Designing with static circuits is generally easier; thus, static
styles are by far the most popular. Exceptions include certain custom designs
when the highest speed is necessary, even if it must be at the cost of more
design effort.
In conventional circuit design, only two types of devices would exist:
nMOS and pMOS devices. For each type of device, all instances have the same
threshold voltage. This style is called single-threshold design. Having multiple
thresholds for each type of transistor, for example, two for nMOS devices and
two for pMOS devices, provides a useful extra degree of freedom. Within a
circuit (e.g., a NOR gate), low-threshold devices can be used for critical paths,
that is, parts of the circuit through which the signal needs to go the fastest.
Meanwhile, high-threshold devices can be used for the rest of the circuit, since
that choice helps save power consumption. This style is called multiple-threshold
design. Finally, a given circuit does not always need to go as fast as possible. In
that case, it may be useful to have a threshold for each device that is variable.
Through one of the pins in a transistor, its threshold voltage can be modified and
thus be made variable. This style is called variable-threshold design. Multiple-
threshold styles are growing for low-power, high-speed designs owing to their
stringent requirements.
In a conventional circuit design, each input and each output is given by one
pin or wire. This is called a single-ended circuit design style. Why would it ever
be otherwise? Having two wires to represent each value may be helpful. For each
signal of voltage V, one wire can represent the amount of voltage by holding a
V/2 voltage value, while, conversely, the other wire can represent the same amount
but with a negative voltage, -V/2. The actual voltage V is the sum of both values,
or V = (V/2) - (-V/2). Why is this setup useful? The answer is because of noise.
When noise affects the wires in similar ways, for example, a sudden disturbance that
adds a small voltage v to both wires, the overall voltage value won't change:
(V/2 + v) - (-V/2 + v) = V. Because of the overhead needed when duplicating the
number of wires and its consequential loss in area efficiency, most circuits are
still single-ended, despite this advantage. Specifically, digital circuits tend to be
single-ended in style, while most analog circuits and analog-like circuits (those
that are digital but sensitive to noise or that interface with analog circuits) are
dual ended in nature.
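
The noise-cancellation arithmetic above can be checked directly; the voltage values used here are arbitrary:

```python
# The arithmetic behind differential signaling: common-mode noise v added
# to both wires cancels when the receiver takes the difference.

def received(value, noise):
    plus = value / 2 + noise        # wire carrying +V/2, disturbed by noise
    minus = -value / 2 + noise      # wire carrying -V/2, same disturbance
    return plus - minus             # receiver recovers (V/2 + v) - (-V/2 + v) = V

print(received(1.0, 0.0), received(1.0, 0.2))
```

The recovered value is identical with or without the common-mode noise, which is exactly why noise-sensitive (typically analog) circuits pay the extra-wire cost.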


In conventional circuit design, only one fixed voltage supply value would
exist for a logic block and logic library (not counting the ground pin). This style
is called single-supply design. Having multiple voltage supplies provides a useful
extra degree of freedom. For example, within a logic block, certain logic gates
could use a lower supply while those included in critical paths may be powered by
a higher supply. This style can clearly lower power consumption, assuming that
the block still works correctly. This style is called multiple-supply design. Finally,
as described previously, a given circuit does not always need to go as fast as
possible. In that case, it may be useful to have a variable supply voltage (VDD). This
approach implies making the supply voltage pin a control pin through which the
supply can be made variable. This style is called variable-supply design. Multiple-
supply styles are growing for low-power, high-speed designs.
The circuit design style has implications on the design methodology, as it
determines various aspects, including but not limited to

How to verify. For example, verifying dynamic logic circuits requires
different noise verification than verifying static logic and is generally
more complex.

How to characterize. For example, characterizing circuits with
variable- or multiple-supply voltages usually requires more complex
characterization, as there are more cases to characterize.

How to clock. For example, while static logic circuits have clocks only
when they are sequential (latches), all dynamic logic circuits have a clock
signal input (so that charges and values can be properly stored on clock cycles).

How to power. Clearly, multiple-supply logic circuits need a more
complex supply voltage distribution network, as multiple supply lines
need to be routed across the chip to arrive at every cell or logic block.

Circuit simulation

Once a circuit has been entered using an entry tool, the next action taken
is simulation. Circuit simulation is perhaps the oldest circuit design task with
reasonable automation. The goal of circuit simulation is simple: to ensure the
functionality and various performance parameters of the circuit, taking into
account the electrical characteristics of the technology being used to implement
the circuit.


Specifically, in digital circuit simulation, we strive to answer the following
questions:

Is the functionality correct? That is, does this circuit actually do what it is
supposed to do?

At what speed can this circuit operate? Answering this question is
typically accomplished by calculating a delay from the time when an
input signal changes to the time when an output signal changes.

What is the circuits power consumption?

What is the circuits resistance to noise conditions?

Figure 5-6 depicts graphically the steps and components of circuit simulation.

In digital circuit simulation, time is typically a key variable, because it is important
to see whether the signals move from 0 to 1, and vice versa, at the right time and
with the right voltage. This type of simulation is called transient simulation and
is by far the most common in logic circuit design.

Fig. 5-6. Simplified circuit simulation process

Three main inputs are needed to simulate a circuit:

Input signals and parameters. First, we need to know what signals are
entering this circuit and at what times. In the case of digital circuits, most
simulations are time based; that is, what we are attempting to do is
simulate the circuit over time. Thus, it is important to know when a signal
arrives at an input and with what value (1 or 0). We also need to know the
environmental conditions under which the circuit is working, including

Ambient temperature, which will be between a minimum temperature
Tmin and a maximum temperature Tmax

Voltage supply, which may vary, intentionally or owing to noise and
other imperfections, and will be between a minimum supply VDD-min
and a maximum supply VDD-max

Assumed process, which goes from a worst-case to a nominal [or
typical] to a best-case assumption (i.e., since the manufacturing process
cannot be perfectly controlled, simulations are typically done at a given
process point)

Schematic. Obviously, we need the circuit itself. As with logic synthesis,
the simulation is done on a special data structure that stores all the
information needed in the most effective format for simulation. We call
this structure, again, a netlist. In this case, the netlist includes devices
(transistors, as opposed to gates in logic synthesis), and key aspects, such
as function (pMOS or nMOS), size (how many micrometers in width and
length), and interconnections through wires.

Technology device models. To simulate a circuit, we need the fundamental
electrical characteristics of the devices and wires. These are packaged into
the technology model, a key piece of information that the manufacturing
entity (fabrication facility) needs to provide for each technology in which it
will fabricate circuits. In logic synthesis, we saw that there can be various
technology logic libraries, even for a single manufacturing company.
Analogously, at the circuit level, there can be different technologies,
depending on what kind of transistors are allowed. This matter is related
to the circuit style discussion, in that only certain devices may be allowed
(e.g., slow, high-threshold transistors for a low-power technology).
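
The environmental conditions listed above are often enumerated systematically as process/voltage/temperature (PVT) corners. A sketch, with placeholder values rather than numbers from any real technology:

```python
# Enumerating simulation conditions (PVT corners) as described above;
# the specific values are placeholders, not from any real process.
import itertools

temperatures = [-40.0, 25.0, 125.0]      # Tmin .. Tmax, degrees Celsius
supplies = [0.9, 1.0, 1.1]               # VDD-min .. VDD-max, volts
processes = ["worst", "nominal", "best"] # assumed process points

# Every combination gets, in principle, its own simulation run.
corners = list(itertools.product(temperatures, supplies, processes))
print(len(corners))
```

In practice only a handful of these combinations are simulated, chosen to bound the behavior of interest (e.g., hot, low-supply, worst-process for maximum delay).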

Example. Consider a NOR gate circuit that needs to be verified. Let us briefly
go over the steps in simulating this circuit.
The function that needs to be verified is VOUT = A NOR B. In other words,
the circuit needs to perform this function while satisfying its other requirements,
such as timing and power consumption.
For verification of timing requirements, the circuit is simulated under key
environmental conditions (temperature, voltage, and process assumption) that
show if its speed is as required. Consider the case in which the circuit's
speed requirement is a delay of less than 100 picoseconds (i.e., 100 × 10^-12 seconds
from inputs to outputs). In this case, we need to ensure that when either A
or B changes such that VOUT needs to change, VOUT takes no longer than 100
picoseconds to do so. For example, if A = B = 0, then VOUT = 1. Then, if A
changes from 0 to 1, VOUT will change from 1 to 0, and it needs to do so after no
more than 100 picoseconds.
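
The timing check in this example amounts to comparing a measured delay against the 100-picosecond specification. The transition times below are invented for illustration:

```python
# Checking the 100-picosecond requirement from the example: compare the
# simulated output transition time against the input change time.

SPEC_DELAY_PS = 100.0

def meets_timing(t_input_change_ps, t_output_change_ps):
    delay = t_output_change_ps - t_input_change_ps
    return delay <= SPEC_DELAY_PS

# A changes at t = 0; VOUT (1 -> 0) settles at t = 85 ps in this made-up run.
print(meets_timing(0.0, 85.0), meets_timing(0.0, 130.0))
```

A real flow would extract both timestamps from the simulator's waveforms (typically at the 50% voltage crossing points) rather than take them as inputs.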
Question. Under what conditions should we do the simulations?
Because, in this case, we are trying to ensure maximum speed, we should
simulate under the conditions of highest temperature Tmax, lowest supply voltage
VDD-min, and the worst-case process.
To verify power consumption, we also need to perform simulations under
certain conditions. Since power consumption has various components, we may
need to capture the dynamic and static portions of the power consumed. Circuit
simulation is primarily an electrical simulation process, whereby electrical voltages
and currents are computed over time. Power consumption can be computed
using circuit simulation because, as we may recall, power is computed from
voltage and current: P = VI. Power in digital circuits, however, is not independent
of speed. It will depend on how fast the circuit is running. More specifically, it
will depend on how many changes happen in a certain interval of time. Thus,
during simulation, if we want to compute the power consumed, we may want to
examine the current drawn on the supply of that current (i.e., the power supply).
Therefore, we may assume that the voltage is basically constant and equal to
VDD; then, we need only to find the current over a certain interval of time. Once
we do so, we can integrate and/or average that current over a unit of time (i.e.,
a second) to obtain the power consumption. To summarize, measuring power
through circuit simulation means measuring current.
In our example, we may find that, for an example voltage supply of one
volt and signals changing at a frequency of one gigahertz (10^9 hertz), the power
consumed is 0.1 milliwatts.
Question: How is this number, 0.1 milliwatts, obtained through simulation?
If the input signals are changing at a frequency of one gigahertz, that means
they are changing every 1/10^9 seconds (i.e., every nanosecond [10^-9 seconds]).
Therefore, to simulate this case, we make the simulator inputs A and B such
that changes from 0 to 1 happen that fast, every nanosecond. Assuming that
the simulator cannot compute power consumption directly (which is increasingly
uncommon), we measure the current at the output signal VOUT over a certain time
(e.g., 10 nanoseconds). Then, we average or integrate it over that time period and
multiply it by VDD, to obtain the final power consumption value. This is an average
power consumption value, not an instantaneous value (which would result from
multiplying the current at a given time by the voltage supply VDD).
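
The current-averaging procedure just described can be sketched as follows. The current samples are invented so that they average to the 0.1-milliwatt figure from the example:

```python
# Average power from a simulated current waveform, as described above:
# average the current samples over the window, then multiply by VDD.

VDD = 1.0  # volts, as in the example

def average_power_w(current_samples_a):
    i_avg = sum(current_samples_a) / len(current_samples_a)
    return VDD * i_avg              # P = V * I, averaged over the interval

# Made-up samples averaging 0.1 mA -> 0.1 mW at a 1-volt supply.
samples = [0.00005, 0.00015, 0.0001, 0.0001]
print(average_power_w(samples))
```

Multiplying any single sample by VDD instead would give the instantaneous power mentioned in the text, not the average.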


Simulating noise margin is also necessary. To do so, we need to simulate the
circuit to ensure that it works well under conditions that are not the ones that
we are expecting. This is done to account for factors beyond the usual voltage
supply, ambient temperature, and process assumption.
The circuit's functional behavior is still valid (i.e., it is still a correct NOR
gate circuit) when the input voltage is below a certain value for some reason, for
example, when the input voltage is 0.7 volts, as opposed to 1 volt. Its functional
behavior is also still valid when the separation between a change in input A and
a change in input B is less than a certain amount, for example, 10% of the
frequency mentioned. Finally, its functional behavior is still valid when the voltage
thresholds of the transistors change by a certain amount, for example, 10%
above or below the expected threshold of each transistor device.
Example: Latch. Figure 5-7 graphically depicts another example, this time
on a simple latch circuit with a single data input and single clock input.

Fig. 5-7. Simulation on a simplified latch circuit

As figure 5-7 shows, the latch includes two stacks of transistors. Each stack
is connected to the clock signals, either its positive version or its negative version
(as denoted with an overbar). The data input enters through the left stack, and the
data output comes out of the right stack.
When the clock signal rises from 0 to 1, the left stack is on, and the right
stack is off. As a result, the data are transferred into the latch by going through
the left transistor stack.
On the same clock cycle, when the clock falls from 1 to 0, the left stack is off,
and the right stack is on. The data are then stored permanently (or at least until
the next clock cycle) and circulate through the inverters (the shapes formed by a
triangle and a bubble). Storage is ensured through positive feedback. The inverters
propagate the signal to the output, then back into the right stack of transistors.


On the right side of figure 5-7, waveforms resulting from the simulation
of this circuit are shown. At the top is the waveform of the clock signal, as it
goes down from 1 to 0. Note that the signal is not perfectly rectangular, which
is typical in real-world designs. As mentioned before, signals, including clock
signals, experience a number of distortions and imperfections owing to the wires,
devices, and various sources of noise in a chip.
In the bottom-right corner of figure 5-7, the amount of current resulting from
simulation of the circuit is shown. This current is the best indicator of the power
consumed by this circuit. As figure 5-7 shows, the current increases significantly as
the clock changes polarity. This is because while the clock is neither 1 nor 0, all the
clock transistors are somewhat on, and there may be a current directly connecting
VDD and ground. This is a phenomenon that is common with logic gates, such as
inverters. Indeed, during this short interval of time, the two inverters at the output
may also experience a similar effect, if their input is neither exactly 0 nor exactly 1.
To compute the overall power consumption by this circuit, under these
conditions, the designer can multiply the supply voltage VDD by the current shown
in the right side of figure 5-7. The current should be averaged over a clock period
to compute average power. Otherwise, we are dealing with instantaneous power,
as it comes from instantaneous current.

Circuit verification
Once a circuit has been entered and simulated, the next
action taken is circuit verification. Although the word verification is used, in
this context it does not refer to the task of verifying the function, power, speed,
or area of a circuit against its specifications. Circuit verification entails the final
preparations before a circuit can progress to the layout phase (i.e., before layout
generation). Think of this step as a set of checks to make sure various items are
consistent; it can thus be seen as a best-practices or quality-assurance design task.
Circuit verification includes the following tasks:

Circuit checks. Various checks on the circuit are run, sometimes using
software scripts written by the design team. For example, does the circuit
have a power supply? Does it have a ground? Does it have both inputs
and outputs? Is any device (e.g., a special low-power transistor) used that,
even though it went well through simulation, ultimately cannot be used in
this particular library and for this particular project (e.g., because it is too
expensive an option to select from the manufacturer)?

Pin assignments. Making sure that the pins of the circuit correspond
to other related documentation is more important than it might at first
appear to be. Pins are the entry and exit point for any digital circuit
entity, small or large, especially for tools that see that entity as a black
box. We do not want those tools to pick the wrong block or use it
incorrectly. For example, the circuit is likely representing a specific
logic gate in a library; accordingly, the pins likely need to be named
consistently with that logic gate. Logic gate pin names may reside in a
completely different file or schematic and thus need to be examined,
ideally by automated software.

Establishment of naming conventions. Every design team includes a
number of engineers, each of whom works on a separate design task
or design block. Naming conventions, that is, general guidelines about
how a device, a wire, or a block (gate or larger) is named, are utilized
by all members of the team for a number of reasons, including to ensure
consistency, to be able to effectively assemble the blocks, to properly
document the circuits, and to be able to use all the design tools smoothly.

Ensuring that a consistent methodology is followed. The design
methodology encompasses a number of guidelines and steps that must
be followed during a design process. When circuits are complete, it is
important to ensure that the overall methodology has been followed for
the design to which these circuits belong. When checking that a circuit is
legal, methodology checks may include the aforementioned items, such
as naming conventions, as well as a broader set of items, such as the set
of scripts or tools used to generate or simulate the circuit. For example,
we may need to check that the circuit was simulated using a particular
version of the circuit simulator (e.g., version 2.0). Why? This will ensure
consistency in results among all circuits and confirm that the designer
is using a simulator and software libraries that have been properly
calibrated with the manufacturing technology libraries and have been
extensively used before with this technology.
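
A circuit-check script of the kind described in the first item above might look like the following sketch. The netlist representation, net names, and the banned-device name are all invented for illustration:

```python
# Sketch of a team-written circuit-check script: quick sanity checks over
# a netlist before layout. The netlist format here is a made-up dictionary.

def check_circuit(netlist):
    """Return a list of problems found (empty list = all checks pass)."""
    problems = []
    nets = netlist.get("nets", [])
    if "VDD" not in nets:
        problems.append("no power supply net")
    if "GND" not in nets:
        problems.append("no ground net")
    if not netlist.get("inputs"):
        problems.append("no input pins")
    if not netlist.get("outputs"):
        problems.append("no output pins")
    banned = {"exotic_lowpower_fet"}   # devices disallowed in this library
    for dev in netlist.get("devices", []):
        if dev in banned:
            problems.append(f"banned device: {dev}")
    return problems

good = {"nets": ["VDD", "GND", "n1"], "inputs": ["A", "B"],
        "outputs": ["VOUT"], "devices": ["nmos", "pmos"]}
print(check_circuit(good))
```

Real teams run such checks automatically on every circuit before it is released to layout, so that a missing supply or a disallowed device is caught early rather than during layout verification.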

Analog Circuit Design

The previous section covered the most common type of circuit design (or
at least the type of design that covers the most surface real estate among the
world's chips), namely, digital circuit design. Every digital gate circuit may be
used thousands of times in a digital block. However, an increasingly important
type of circuit design is analog circuit design. In some ways, analog circuit
design predates digital circuit design, since most circuits near the beginning of
electronic design were not digital circuits (e.g., amplifiers, whose job is to make
a signal larger).
Definition. Analog circuits are those that are based on continuously
variable signals.
While digital circuit signals can take only two different levels (0 and 1), analog
circuit signals can take any voltage value. This voltage represents a signal; that
is, it has a proportional (i.e., analog) relationship to that signal. For example, a
voltage value of 0.9 volts may mean a music volume level of 18 decibels, whereas
a voltage value of 0.8 volts may mean a music volume level of 16 decibels.
Similar examples could be provided for other properties, such as video intensity,
atmospheric pressure, and temperature.
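This proportional relationship can be sketched in a few lines; the 20-decibels-per-volt scale below is a made-up mapping chosen only to match the numbers above, not a property of any real circuit:

```python
# Hypothetical proportional (analog) mapping from a voltage to a volume level.
# The 20 dB-per-volt scale is an assumed example matching the text above.
def volume_db(voltage):
    """Return the volume level encoded by an analog voltage."""
    return 20.0 * voltage

print(volume_db(0.9))  # -> 18.0 decibels
print(volume_db(0.8))  # about 16.0 decibels
```

The key property is that the output is a scaled copy of the input: doubling the voltage doubles the represented quantity.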
No such proportional relationship pertains to digital signals, since these
signals, when grouped into words, hold an encoding (nonproportional)
relationship to the information that they represent. For example, a set of digital
signals encoded as 0101 may represent the number 5 in a calculator chip (i.e.,
2^0 = 1 for the right-most bit plus 2^2 = 4 for the third bit from the right; recall
that binary encoding adds a power of 2 for each bit set to 1, where the exponent
corresponds to that bit's position in the encoded word).
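The decoding rule just described can be sketched directly; this is ordinary positional binary decoding, not any chip-specific algorithm:

```python
# Decode a binary word by adding a power of 2 for each bit set to 1.
def decode_binary(bits):
    """bits is a string such as '0101'; the right-most character is position 0."""
    value = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            value += 2 ** position  # this bit contributes 2^position
    return value

print(decode_binary("0101"))  # -> 5, from 2^0 + 2^2
```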
Analog circuit design is, from many perspectives, much more complex than
digital design. There are not as many analog circuits in a typical chip today, but
each of them takes a long time and much effort to complete. Analog circuits still
do not fit into a fully automated synthesis methodology; that is, they typically
do not belong to a library from which a tool picks blocks to construct a high-level
function. Because of this, analog circuits do not generally appear many times in
a single chip, at least not in the same order of magnitude as digital circuit cells.
Notable exceptions include high-speed communications links, which may appear
in the tens or even hundreds in fast networking chips; again, however, these
exceptions do not reach the same order of magnitude as the usage of a typical
logic cell.
Question. Why are analog circuits so much more complex to design?
When verifying a digital circuit, a key assumption holds: signals can only
take two meaningful values, 0 and 1. When verifying an analog circuit, we
cannot make this assumption; rather, the signal can take many different values,
which complicates the process, accuracy, and resolution of each simulation and
verification. In addition, analog circuits tend to have more complex and more
numerous metrics that need to be measured, beyond the speed/power/area/
noise metrics that are typical of digital circuits. For example, simulations for
analog circuits may have to be done not only in the time domain (transient)
but also in the frequency domain, to understand a complex behavior for various
frequencies of operation.
Circuit and layout design 111

Based on these facts, it might be surprising to see the analog design process
depicted in figure 5-8. The flow somewhat resembles the digital cell design flow
shown in figure 5-1.

Fig. 5-8. Simplified analog circuit design process

Nevertheless, the analog circuit design process does have some important
differences as compared with the digital cell design process. First, you will
notice that instead of layout synthesis, the flow chart indicates layout entry. Also,
instead of layout characterization, the flow chart indicates layout extraction and
simulation. Otherwise, the chart looks strikingly similar to the one for digital
cell design.
These differences are due to two important peculiarities of analog design.
First, analog design is much less automated than digital design. While the debate
exists, there is overall consensus in the chip design community that analog
design is at least a decade behind digital design in terms of the level of tool
automation. As a result, few mainstream automated analog synthesis tools exist.
Furthermore, creating schematics and layouts for analog circuits is largely a
manual task, although a new crop of tools has recently made significant strides
toward automation. While creating layouts for digital cells is not entirely automated
either, it tends to be more automated than for analog circuits, and the generation
of the overall layout from assembled logic cells is also largely automated.
Second, analog design is typically not used to create a library of standardized
gates from which a synthesis tool will pick and combine to form a complex
high-level function. Again, this assembly function is primarily, although not
completely, manual, and each analog circuit is used once or only a few times in
the same chip. As a result, analog circuits are not characterized for the use of
synthesis tools. They are simulated carefully, and characteristics are extracted.
These characteristics are primarily used to simulate the schematic with more
accuracy, by means of an extracted netlist that adds more detailed information to
the schematic on the basis of layout information.
Given these two key differences, an analog circuit design goes through the
following phases. Initially, before a layout is generated, the circuit (composed of
transistors, wires, and often other devices, e.g., resistors or capacitors) needs to
be generated. As figure 58 shows, circuit design starts by doing exactly that:
creating a circuit schematic during design entry. A device library is sometimes
used, made of transistors (or groups thereof) of various types and sizes, plus other
devices, such as resistors or capacitors. These transistors are interconnected to
form the desired analog function (e.g., an amplifier). Inputs, outputs, and power
supply and ground are added to complete the circuit.
Next, the circuit is verified, usually through various circuit simulations. The
goal is to ensure the correct functionality, speed, power, and other characteristics
of the circuit, such as maximum bandwidth. To accomplish these goals, device
models are usedthat is, models for the transistors and other devices used, plus
approximate models for the interconnections or wires. This step is called pre-
layout verification, as the models cannot account for layout details yet, since the
layout is not complete.
After prelayout verification, a layout is entered that faithfully represents
the functionality and electrical characteristics of the circuit. Layout checking is
pursued then, to ensure that the geometrical design rules and guidelines are met.
These guidelines are often different from and possibly more stringent than those
for digital circuits.
As for digital circuits, this layout is then modeled using a process called
layout extraction. In this process, an electrical model is created for the layout by
mapping each subset of layout polygons to an electrical element (e.g., a resistor
or a capacitor), so that the circuit simulation can be rerun afterward with much
more accuracy.
Occasionally, a characterization process is also run, consisting of a number
of simulations of various types. The resulting functions, or curves, can be used
for high-level behavioral simulations.
As can be seen in figure 5-8's loops, if any of these steps produces a result
that does not meet the analog circuit specifications (as always, there must be a
set of specifications!), then either the circuit or the layout needs to be redesigned
in an attempt to correct the problem.


Design entry
Analog circuit design is the oldest type of circuit design. Circuit design entry
is, in this case, done using a schematic editor. For many years, schematics have
been the most common format used to describe, explain, enter, annotate, and
evaluate analog circuits. It just makes sense. The process is as close as one can
get to drawing the circuit on a piece of paper or a white board.
Schematic entry is therefore a very important piece of analog circuit design,
as it is the main description of the circuit itself! Accordingly, there are several goals
of schematic entry, as outlined using examples in the following paragraphs.
The first goal of schematic entry is to describe and document the goals of
this circuit (including its requirements) and the approach that will be followed in
designing it.
Example. Consider having an annotation in the schematic that says "This
is analog amplifier version 1.0, in 0.13 μm CMOS technology, with an expected
gain of 40 decibels." The gain of an amplifier is the amplification multiple that
is applied to its input to obtain its output; that is, a ratio between the output
voltage amplitude (voltage gain) or power consumption (power gain) and the input
voltage amplitude or power consumption, respectively. A decibel is a logarithmic
metric without units, with a base of 10, that expresses an amount relative to a
certain reference, typically power consumption. Because it is logarithmic, very
large or very small amounts can be represented. For metrics such as amplification
gain, it can be expressed as

GdB = 20 log10 (Vout / Vin)

based on a voltage ratio, or

GdB = 10 log10 (Pout / Pin)

based on a power consumption ratio.

Since power consumption is roughly proportional to the square of the
voltage, six decibels is equivalent to a twofold gain in voltage, and three decibels
is equivalent to a twofold gain in power consumption.
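Both formulas, and the 6-decibel/3-decibel rule of thumb, can be checked numerically; the function names below are illustrative, not from any particular library:

```python
import math

# Decibel gain from voltage and power ratios (standard definitions).
def gain_db_from_voltage(v_out, v_in):
    return 20.0 * math.log10(v_out / v_in)

def gain_db_from_power(p_out, p_in):
    return 10.0 * math.log10(p_out / p_in)

# A twofold voltage gain is about 6 dB; a twofold power gain is about 3 dB.
print(round(gain_db_from_voltage(2.0, 1.0), 2))  # -> 6.02
print(round(gain_db_from_power(2.0, 1.0), 2))    # -> 3.01
```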
The second goal of schematic entry is to simulate the circuit. An extensive
discussion of how simulation occurs in digital circuits has already been undertaken.
In the case of analog circuits, a few aspects stand out as different:

Devices need to be simulated that do not show up in most digital circuit
schematics (at least not before extraction from layout has been executed).
For example, resistors, capacitors, inductors, and diodes are all relatively
common components in an analog circuit schematic from the outset.

Also, as with any other circuit, we need to enter input signals to stimulate
the circuit for the simulation. However, since analog circuits are not
restricted to two logical values per signal (0 or 1), the input signals may
need to be much more complex and may need to appear in various
places in the circuit to simulate for a number of complex situations. For
this reason, it is common to have stimuli-generating devices, such as
voltage and current sources that can provide various types of waveforms.

Finally, netlist generation is still needed to generate a data form that the
simulator can take. This tends to be more complex since we need to
ensure that we are using the allowed sets of devices, and these sets are
broader than for digital devices, as discussed earlier.

The third goal of schematic entry is to guide layout design. Circuit schematics
can also store very important information to steer the design of the circuit's
layout. Indeed, in most
analog circuit design teams, the circuit designer and the layout designer may be
in very close touch or even be the same person, for a given circuit block. Why
is layout-guiding information so important if there is so much communication
between circuit and layout design? The answer is because analog layouts need to
be designed very carefully. Analog signals need to be precisely at certain levels
and do not necessarily saturate into simple 0 or 1 levels; consequently, their
values depend highly on the resistors, capacitors, and other parasitic devices that
result from the way the layout is pursued and were not intentionally created by
the designer. As a result, the schematic may have a number of annotations about
the actual topology, sizes, number of fingers, orientation of each finger, and so
forth. Detailed layout instructions are sometimes even written directly on the
schematic. The performance of the circuit depends on it!
Example. Analog circuits are often differential; that is, they have two differential
inputs and two differential outputs, and the whole internal architecture is based
on two wires. (The differential concept was explained previously, under "Styles.")
Thus, a typical annotation would read something like "These two transistors need
to be laid out exactly symmetrically and vertically attached to each other."
The fourth goal of schematic entry is to pursue layout extraction/verification.
As discussed earlier, it is imperative to extract electrical information based on
the layout that was generated and then feed it back to the circuit schematic in
the form of an annotated schematic; that is, a schematic that includes all the
additional devices (most often, resistors and capacitors) derived from wires and
other items that were not included in the initial schematic. The other reason
why the schematic is important is that once the extraction has been done, we
need to check that the schematic and the layout are consistent with each other,
which is referred to as layout-versus-schematic (LVS) verification. To pursue this
comparison, clearly, we need to start with an accurate schematic!
Example. A schematic has a number of wires interconnecting the various
transistors and other devices in the circuit. These wires, however, have an
associated, nonnegligible capacitance and resistance (and, in extreme cases of
very long wires, inductance). Unfortunately, the values of this capacitance and
resistance depend on the length and the width of the wire, which will not be
known until the wire is laid out. Hence, we need to wait until we have a layout
before we will be able to provide these annotations to the circuit schematic.
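A rough sketch of why this matters: wire resistance is often estimated as a sheet resistance times length over width, and capacitance as a per-area constant times the wire's area. Both constants below are made-up placeholders, not values from any real manufacturing technology:

```python
# Hedged sketch: first-order wire parasitics from layout dimensions.
# SHEET_RES and CAP_PER_UM2 are hypothetical placeholder values.
SHEET_RES = 0.1         # ohms per square (assumed)
CAP_PER_UM2 = 0.05e-15  # farads per square micron (assumed)

def wire_resistance(length_um, width_um):
    """R = sheet resistance x (length / width): longer, narrower wires resist more."""
    return SHEET_RES * (length_um / width_um)

def wire_capacitance(length_um, width_um):
    """C grows with the wire's area, so it too depends on length and width."""
    return CAP_PER_UM2 * length_um * width_um

print(wire_resistance(100.0, 1.0))   # -> 10.0 ohms for a 100 um x 1 um wire
print(wire_capacitance(100.0, 1.0))  # 5 femtofarads for the same wire
```

Neither number is known until the wire's length and width are fixed by the layout, which is exactly why extraction must wait for the layout.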
Circuit extraction. The analog circuit schematic is also the key source
to extract higher-level models, for use by higher-level design tools, such as basic
behavioral models. Once these models are created, a large simulation can be done
that connects this analog block with other analog blocks and possibly digital blocks.
Example. Consider an analog amplifier again. As discussed earlier, the gain
of that amplifier is a critical parameter. If we wanted to create an extremely
simple behavioral model, we could enter the circuit schematic, then pursue a
simulation to compute the amplifier's gain G, and then create a model as a black
box as follows. The inputs to the box are the inputs to the amplifier. The voltage
supply and ground for the box are the same as the amplifier's power and ground
pins. Finally, the outputs also correspond exactly to the amplifier's outputs. The
function that represents the circuit at a high level is

Vout = G × Vin

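A behavioral model of this kind collapses the entire circuit into one multiplication. A minimal sketch, with an assumed gain value:

```python
# Hedged sketch of a black-box behavioral amplifier model: output = gain x input.
class AmplifierModel:
    def __init__(self, gain):
        self.gain = gain  # G, as measured from a circuit simulation

    def output(self, v_in):
        """High-level function of the block: Vout = G * Vin."""
        return self.gain * v_in

# Suppose simulation measured a gain of 40 (an assumed example value).
amp = AmplifierModel(gain=40.0)
print(amp.output(0.01))  # a 10 mV input becomes a 0.4 V output
```

Once such a model exists, a large system simulation can use it in place of the transistor-level circuit, connecting it to other analog and digital blocks.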
Example. Figure 5-9 depicts an example analog circuit schematic, for a
low-noise amplifier (LNA), a circuit typically used in the analog interface of radio-
frequency chips, as used in cell phones and GPS devices. Examination of this
circuit provides an understanding of typical basic annotations, requirements,
relationships, and parameters in analog circuits.

Fig. 5-9. An example analog circuit: a simplified LNA


An LNA is able to amplify weak signals captured by the system's antenna
and is usually located close to the antenna. This closeness reduces losses in the
interconnection between LNA and antenna.
This key circuit is said to be part of the front end of many radio receivers.
Signal theory dictates that it should thus be a critical circuit, because the overall
noise of the receiver will be dominated by its very front-end stages. A key goal
of the LNA is to amplify the input signal while adding very little noise to it (i.e.,
minimal noise coming from the LNA itself). The good news is that the higher the
LNA's gain is, the lower will be the output noise; that is, the noise of subsequent
stages is reduced by the gain of the LNA. Thus, an LNA makes it easier to
recognize the signal completely in subsequent stages.
Figure 5-9 depicts a first LNA stage, with high amplification to reduce noise.
At the bottom of the circuit, a transistor acts as a roughly constant current source,
denoted by Iref. This current is divided into the current going through the left
branch (N1) and the right branch (N2).
When Vin(+) becomes larger than Vin(−), the current through the left branch
swings larger than the current through the right branch. As a result, the output
voltage declines. Because it falls faster than the input increases, there is an
amplification effect. The converse applies as well: when Vin(+) becomes smaller
than Vin(−), the current through the right branch swings larger than the current
through the left branch, producing the opposite effect and larger output voltages
(as the difference between Vout(+) and Vout(−)).
Typical requirements for a circuit of this kind include

Gain. Input-to-output gain, for a given set of frequencies, is the most
common input requirement for this circuit block. For example, a gain of
30 or higher could be required.

Power consumption. The most important trade-off is power
consumption. Otherwise, it would be easy, as will be discussed later, to
increase gain by increasing current and the size of transistors.

Input impedance and frequency behavior. To properly isolate the circuit
from others at its input, we need to have high impedance (the effective
combination of resistance, capacitance, and inductance) to this circuit.
This also represents a trade-off, as large impedance is often achieved by
having large input transistors, which consume power!

The concept of impedance can be elaborated on further by reference to the
example of figure 5-9. Impedance measures the opposition to a certain current
and extends the concept of resistance from steady-current circuits to sinusoidal-
current (moving up and down as a periodic wave) circuits. Impedance allows us to
describe not only magnitudes of voltage and current but also their relative phases
(i.e., how time shifted they are with respect to each other). The general equation
for impedance is as follows:

V = ZI (5.6)

Note that in equation (5.6), every symbol now represents a complex number.
It turns out that any signal in most analog circuits can be expressed as a
weighted sum of a set of components, each of which represents the signal's
portion that runs at a specific frequency. For this reason, to capture magnitude
and phase/frequency characteristics, impedance is typically denoted as a
complex number, which results from moving from the time domain into the
frequency domain by use of mathematical transforms, such as the Fourier
transform. While the theory and mathematical background are beyond the
scope of this book, this concept is important in analog circuits, as currents and
voltages are often smooth and sinusoidal, typically centered around a concrete
set of frequencies.
As a complex number, impedance includes a real number and an imaginary
number. Another way to look at it is as a complex number with magnitude |Z|
and phase θ:

Z = |Z| e^jθ  (5.7)

The impedance magnitude represents the change in voltage amplitude for

a given current amplitude. The impedance argument (angle) gives the phase
relationship and, thus, the frequency component (since phase is proportional
to frequency). The real part is the resistance, and the imaginary part is the
reactance, which comes from capacitance and inductance components. For a
pure resistor, impedance is a simple real number:

ZR = R (5.8)

For a pure inductor, impedance is a simple imaginary number, proportional

to the frequency, since derivatives of current in the time domain become
proportional to frequency in the frequency domain:

ZL = jwL  (5.9)

where w denotes frequency; note again that phase is proportional to frequency.

For a pure capacitor, impedance is a simple imaginary number, proportional
to the inverse of the frequency, since derivatives of voltage in the time domain
become proportional to frequency (or phase) in the frequency domain:

ZC = 1/(jwC)  (5.10)

Therefore, as is already evident, with everything else being equal, the larger
an inductance or resistance device in series is, the larger its impedance will be.
Conversely, large capacitors in series can be of low impedancethat is, they will
let current flow easily through them. The opposite is true for devices in parallel.
Finally, like resistances, impedances are added when in series (same wire),

Z = Z1 + Z2 + ...  (5.11)

and their inverses are added when in parallel,

1/Z = 1/Z1 + 1/Z2 + ...  (5.12)

Larger input transistors may thus provide larger input impedances. Returning
to figure 5-9, we see that the larger an input transistor is, the larger will be its
input resistance and inductance associated with its input wire and pin, since both
are in series at the entry. However, since the transistor has effective capacitors in
parallel (remember that there is an oxide separating the gate and the substrate,
which can be assumed to be close to ground), the larger its size is, the larger will
be its capacitor and equivalent impedance.
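The series and parallel rules above map directly onto Python's built-in complex numbers. This sketch combines arbitrary example component values at one frequency; nothing here comes from a real design:

```python
import math

# Impedances as complex numbers: ZR = R, ZL = jwL, ZC = 1/(jwC).
def z_resistor(r):
    return complex(r, 0.0)

def z_inductor(l, w):
    return 1j * w * l

def z_capacitor(c, w):
    return 1.0 / (1j * w * c)

def series(*zs):
    return sum(zs)  # impedances add in series

def parallel(*zs):
    return 1.0 / sum(1.0 / z for z in zs)  # inverses add in parallel

w = 2 * math.pi * 1e9  # example: signals centered around 1 GHz
z = series(z_resistor(50.0), z_inductor(5e-9, w))
print(abs(z), math.degrees(math.atan2(z.imag, z.real)))  # magnitude and phase
```

The magnitude and angle printed at the end are exactly the |Z| and phase discussed above: how much the element opposes current, and how time shifted voltage and current are.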
Other requirements often exist, the most important of which is bandwidth.
Although the details of evaluating circuit bandwidth are beyond the scope
of this book, it can be understood from the preceding discussion that the
higher the frequency is, the higher will be the impedance presented to passing
currents and voltages, in most cases. Therefore, typical amplifiers have good
amplification gains until a certain frequency, above which gain begins to taper
off greatly. This limits the range of working frequencies to a certain level,
which is the bandwidth. Thus, there is a trade-off: with improved gain and
bandwidth (i.e., the two most common performance requirements in analog
design) comes (guess what) increased power consumption, the eternal issue
in circuit design.


Annotations. There are several notable schematic annotations in figure
5-9, including

The "symmetric" annotation, indicating that both transistors and both
inductors should be carefully laid out as symmetric structures. Why is this
important? Because when both inputs are equal in voltage, we should
not have any output; that is, both output voltages should be equal. If
both transistors are laid out carefully and symmetrically, the likelihood
that their device characteristics will be exactly the same is high; thus,
when both input voltages are the same, the effective output voltage will
be zero.

The sizes, in terms of width (W) and length (L), of each of the transistors.

The current reference indication, which may also include its actual value
(four milliamperes, or four-thousandths of an ampere) and a reminder of
how that current was obtained (through a fixed voltage, of 0.5 volts, on
the bottom transistor).

To improve gain, transistors are set to have a high current run through them
and are made as large as power constraints allow, because gain is proportional
to their current and size.
Question. Why does amplifier gain depend on its current?
As can be seen in figure 5-9, the output voltage on each branch is

Vout = Vdd − Z × IN

where IN is the current through the corresponding branch (N1 or N2) and Z is
the impedance of that branch's load device.
Thus, amplifier gain depends on the currents and how they change over time,
the sum of which is the bottom total current. Large currents will produce large signal
swings and, thus, large gain. Since current depends on size, based on equation
(5.1), you can thus deduce that transistor size matters. Again, the trade-off is power
consumption, which is why transistors in this case cannot be made infinitely large.


Analog circuit simulation is the bread and butter of analog design. Analog
circuit simulation includes various types of analysis (generally including more
types than digital, which tends to be limited to a power-performance analysis
based on transient simulation). The following are the most common simulation
methods in analog circuits:
Transient analysis
AC analysis
DC analysis

In transient analysis, we can visualize and analyze various signals in the circuit
over time. This analysis is utilized in analog design to derive key items, such as
power consumption, and various kinds of noise, including Vdd-induced noise.
AC analysis focuses on frequency, that is, on the behavior of a circuit as
viewed from the frequency domain. The chief assumption in AC analysis is
that we are dealing with a number of sinusoidal signals, each of which has only
one frequency. While this is a big assumption, AC analysis is both sufficient
and extremely useful to understand how certain key parameters change as we
vary the frequency at which signals and the circuit need to work. As such, AC
analysis is the key to understanding requirements such as gain, bandwidth, and
overall stability of the circuit. (Certain analog circuits may enter an unstable state,
where signals may saturate owing to positive-feedback effects. Stability typically
needs to be analyzed over frequency, since instability happens only at certain
frequencies. Thus, we need to ensure that the circuit is designed to never reach
those frequencies, or it will not work.)
DC analysis, by contrast, focuses only on one frequency, the zero frequency.
DC analysis assumes that all signals in the circuit have a frequency of zero;
therefore, we are talking about straight, constant voltages and currents. What
we are really doing is a special type of AC analysis (at frequency = 0)
and consequently analyzing only those components of all signals in the circuit.
Why this abrupt decomposition into DC and AC analysis? For circuits that
always have constant values of voltages and currents, the answer is obvious;
for all others, the answer is biasing, whereby a device's characteristics depend
on the major currents and voltages that are applied to it. If we can assume
that the device is biased by a major, constant current or voltage, plus a set
of small, sinusoidal signals, then its main characteristics as a device can be
determined by that major current or voltage. For example, the main value
of the current that goes through a transistor, such as the input transistors
in our amplifier, determines the actual gain that we can obtain. Hence, key
parameters, such as gain, and the various average, maximum, and minimum
voltage and current levels can be derived through DC analysis. This analysis
can also help us to understand what region of behavior a transistor or other
device is in. In analog design, the useful regions of behavior are usually linear;
thus, DC analysis can be used to ensure that each device is properly biased.
For each of these analyses, we can pursue three approaches based on the
nature of the simulation parameters:

Single-point simulation
Parametric simulation
Statistical simulation

Single-point simulation is the default, for which every input variable, device
size, voltage, temperature, process point, and so forth, is a constant number.
Only one simulation is executed, and the results are returned quickly.
In parametric simulation, we vary, or sweep, certain variables to obtain an
understanding of a parameters behavior, either to find an optimal design or simply
to characterize it for the use of higher-level tools or the like. Frequently varied
parameters include the voltage supply, Vdd, and transistor sizes (to determine the
right size for each transistor).
In statistical simulation, we attempt to ascertain the statistical behavior of
the circuit. Unfortunately, circuit parameters of all kinds are not realistically
deterministic (i.e., not a concrete, certain number), but have some statistical
behavior; that is, for each possible value, there is a probability that that value
actually corresponds to what is manufactured. As the size of transistors and
features keeps decreasing, it becomes ever more difficult to manufacture them
with certainty that all will be sized exactly as desired (e.g., in terms of the width or
the length of a transistor). This trend has been making statistical simulation ever
more important. While this is also becoming important for digital circuits, it has
already been of utmost importance for analog circuits for a long time, for reasons
already given. Small variations of parameters in an analog circuit can completely
ruin its behavior.
Popular parameters to be varied in statistical simulations include the threshold
voltage of transistors, Vt, and the voltage supply, Vdd. Statistical simulations are
often pursued by two methodologies:
Assuming and computing a certain statistical model for each of the key
parameters to be varied
Doing a number of single-point simulations to cover as best as possible
the space of combinations of all parameters as they vary


For example, if we believe that the supply voltage can take the values 0.9,
1, and 1.1 volts with equal probability and the threshold voltage for the bottom
transistor can take the values 0.3, 0.35, and 0.4 volts with equal probability, then
we would probably try to execute up to 3 × 3 = 9 simulations. The principal
wrong assumption here is that the supply voltage and the threshold voltage are
independent variables; that is, if one takes one value, the other still has equal
probability of taking any of its three values. This is not always the case; therefore,
a good statistical modeling approach should capture these dependencies, so that
the number of simulations can be reduced, thereby saving time and resources.
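Enumerating the 3 × 3 = 9 combinations is straightforward; the gain formula used to evaluate each corner is a made-up placeholder standing in for a real circuit simulation:

```python
import itertools

# Enumerate all combinations of the two varied parameters (9 "corners").
vdd_values = [0.9, 1.0, 1.1]    # supply voltage, volts
vt_values = [0.30, 0.35, 0.40]  # threshold voltage, volts

corners = list(itertools.product(vdd_values, vt_values))
print(len(corners))  # -> 9

# Placeholder standing in for a circuit simulation; not a device equation.
def simulated_gain(vdd, vt):
    return 100.0 * (vdd - vt)

for vdd, vt in corners:
    print(vdd, vt, round(simulated_gain(vdd, vt), 1))
```

Note that this brute-force enumeration assumes the two parameters are independent, which is exactly the questionable assumption discussed above.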
A very popular type of statistical simulation, even today, is Monte Carlo
simulation. In Monte Carlo simulation, when we have a space of possibilities that
is too large to simulate completely in reality, we sample that space approximately
randomly. The principle is simple: we generate suitable random numbers (in
reality, these are almost-random or pseudo-random numbers, because that is
the only type that a computer can generate), and then we observe what fraction
of these numbers meet some specific requirement. Monte Carlo simulation's
efficiency relative to other methods increases with the size of the problem. For
example, consider the case in which Vdd can vary randomly anywhere from 0.9
to 1.1 volts. We can generate random numbers, each of which corresponds to a
particular Vdd value, and then observe what percentage of simulations produce an
amplifier gain of at least 30, assuming that that is the gain requirement.
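That experiment can be sketched as follows; the gain model is again a made-up placeholder for a real circuit simulation, while the Vdd range and the gain requirement follow the example above:

```python
import random

# Placeholder standing in for a full circuit simulation
# (assumed: gain scales linearly with the supply voltage).
def simulated_gain(vdd):
    return 32.0 * vdd

random.seed(0)  # reproducible pseudo-random sampling
trials = 10_000
passing = 0
for _ in range(trials):
    vdd = random.uniform(0.9, 1.1)   # sample the Vdd space randomly
    if simulated_gain(vdd) >= 30.0:  # the gain requirement from the text
        passing += 1

print(passing / trials)  # fraction of sampled Vdd values meeting the spec
```

With this placeholder model, roughly 81% of samples pass, since only Vdd values above 30/32 ≈ 0.94 volts meet the requirement; a real flow would replace `simulated_gain` with an actual circuit simulation per sample.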
Figure 5-10 depicts the general process followed in the simulation of analog
circuits. As figure 5-10 indicates, analog circuit simulation is roughly similar to
digital simulation in terms of the process.

Fig. 5-10. Analog circuit simulation process


First, the inputs to the simulation process include the following:

The schematic itself, with all its details.

The input signals, or stimuli, and the conditions under which the circuit
needs to work (temperature, voltage supply, and process point). Of these
conditions, the input stimuli tend to differ the most from digital circuits;
analog circuits may require a fairly complex set of input signals over time
and/or frequency. These may be inserted directly into the circuit as signal
generators, or widgets; for example, as a sinusoidal-voltage generator or
a fixed-current generator. In figure 5-10, note the sinusoidal signal that
happens to be one of the inputs (the left input) to the LNA differential
amplifier: Vin(+). This input is a sinusoidal signal (basically a single-frequency
signal) with an amplitude of 0.2 volts. In practice, there is a symmetric signal
mirroring this one, Vin(−), which is the other (right) input to the amplifier and
will be exactly opposite in phase (i.e., Vin(+) will have the exact same magnitude
as Vin(−), but it will be positive when Vin(−) is negative, and vice versa).

The type of analysis desired (AC analysis, DC analysis, etc.), as well
as its level of precision, maximum number of iterations, and the type
of algorithm followed. Although most of these choices may be set to
default values, they do exist and represent a useful yet complex instrument
to optimize a simulation to meet one's needs.

The device models (i.e., the mathematical models for each device,
including transistors, resistors, capacitors, inductors, diodes, etc.). These
models are a fundamental characteristic of the manufacturing technology
used for this chip design and thus are provided by the team in charge of
interfacing with manufacturing or directly by the manufacturing company
(often called the "foundry") or department (if in the same company).
Based on these inputs, the simulation tool generates a netlist (i.e., the data
structure that the simulator can take directly), which represents the circuit in the
schematic, potentially including the simulation type, conditions, and device models.
The result from a typical analog circuit simulation has two components:

Numerical performance results (plus a log file to examine how the
simulation was performed) that can be directly examined: for example,
power consumption numbers

A graph or set of graphs (or the data from which graphs can be built)
plotting specific parameters as a function of time, frequency, temperature,
or whatever has been deemed as the critical dependent parameter

For example, in figure 5–10, one of the results from the simulation is a graph
showing the dependence of the gain on frequency. According to this graph, the
circuit has a maximum gain around a certain central frequency, decreasing on
both sides owing to the frequency-dependent (i.e., capacitive or inductive) effects
of the devices and wires in the circuit. This bandpass filter effect (i.e., filtering
frequencies around a central one) is typical of analog circuits and can be directly
obtained using circuit simulations such as AC-type simulations. As with digital
circuits, analog simulations may need to be redone, changing parameters, device
models, conditions, the type of analysis, and/or the schematic, until the result
satisfies the requirements of the circuit.
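The bandpass behavior just described can be illustrated with a simple frequency sweep. The series RLC transfer function below is a textbook stand-in for the amplifier's AC response, not the actual LNA model, and the component values are invented:

```python
import math

def bandpass_gain(freq_hz, r=50.0, l=1e-6, c=1e-9):
    # Series RLC with the output taken across R: gain peaks at the
    # resonant frequency and rolls off on both sides, the bandpass
    # shape described in the text.
    w = 2 * math.pi * freq_hz
    reactance = w * l - 1.0 / (w * c)
    return 1.0 / math.sqrt(1.0 + (reactance / r) ** 2)

def sweep_peak(freqs):
    """Return the frequency with maximum gain, as an AC sweep would find it."""
    return max(freqs, key=bandpass_gain)

f0 = 1.0 / (2 * math.pi * math.sqrt(1e-6 * 1e-9))    # analytic center frequency
freqs = [f0 * (0.5 + 0.01 * i) for i in range(101)]  # sweep 0.5*f0 .. 1.5*f0
print(f"peak near {sweep_peak(freqs)/1e6:.2f} MHz, gain there {bandpass_gain(f0):.3f}")
```

An AC-type simulation produces exactly this kind of gain-versus-frequency curve, from which the center frequency and bandwidth are read off.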
Figure 5–11 depicts another simulation result. In this case, the result of
simulating the output voltage of an analog circuit called a voltage reference is
shown. The function of a voltage reference is to provide, at its output, a fixed
voltage value with great stability, especially with regard to temperature.

Fig. 5–11. Simulation for a fixed voltage reference

As is evident in figure 5–11, the output voltage is dependent on the
temperature. The output voltage seems to have a decent, although not excellent,
stability versus temperature, as it varies only from 0.88 to 0.9 volts when the
temperature is increased to more than 100°C.
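The stability figure quoted above can be turned into a temperature coefficient with a back-of-the-envelope calculation. The output model below simply interpolates the 0.88-to-0.9-volt drift mentioned in the text; real data points would come from the simulator's output:

```python
def vref_output(temp_c):
    # Hypothetical fit to a sweep like figure 5-11: the output drifts
    # linearly from 0.88 V to 0.90 V as temperature rises to 100 C.
    return 0.88 + 0.02 * min(max(temp_c, 0.0), 100.0) / 100.0

def temp_coefficient_ppm(t_low, t_high, nominal=0.89):
    """Average drift in parts-per-million of the nominal output per degree C."""
    dv = vref_output(t_high) - vref_output(t_low)
    return dv / (t_high - t_low) / nominal * 1e6

print(f"{temp_coefficient_ppm(0, 100):.0f} ppm/°C")
```

A designer would compare this number against the reference's stability specification to decide whether the simulated circuit is acceptable.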

Circuit verification
As with digital circuits, analog circuits need to be prepared for layout. The
goal of analog circuit verification is precisely that: preparing the circuit for
layout generation once it has been designed and correctly simulated.
Common circuit verification tasks include

Circuit checks. We need to check for common issues, such as whether
the circuit actually has a power supply and ground, whether it has all
inputs and outputs, and whether a device was utilized that cannot be
used in this library and for this particular project.

Pin assignments. We need to ensure that the pins of the circuit are
correctly instantiated and correspond to other documentation and related
or interconnected circuit blocks.

Establishment of naming conventions. We need to check for proper
naming conventions for devices, wires, and the circuit block itself, and
to ensure consistency, effective assembly, documentation, and effective
design tool usage.

Ensuring that a consistent methodology is followed. We need to check
that the design methodology, including all of its guidelines and steps,
has been followed when designing the analog circuit. This includes the
versions of the analog tools, their settings, and the editor and simulator
software libraries.
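These checks are mechanical enough that a sketch conveys the idea. The following Python fragment performs toy versions of the checks above on a small netlist description; the data format, net names, and field names are invented for illustration, not taken from any EDA tool:

```python
def check_circuit(cell):
    """Flag common schematic problems before layout: missing supply or
    ground, missing pins, and devices outside the allowed library."""
    issues = []
    nets = {net for dev in cell["devices"] for net in dev["nets"]}
    if "vdd" not in nets:
        issues.append("no power supply net")
    if "gnd" not in nets:
        issues.append("no ground net")
    if not cell["inputs"] or not cell["outputs"]:
        issues.append("missing input or output pins")
    allowed = cell["allowed_device_types"]
    for dev in cell["devices"]:
        if dev["type"] not in allowed:
            issues.append(f"device {dev['name']} uses forbidden type {dev['type']}")
    return issues

inverter = {
    "inputs": ["a"], "outputs": ["z"],
    "allowed_device_types": {"nmos", "pmos"},
    "devices": [
        {"name": "mp1", "type": "pmos", "nets": ["a", "z", "vdd"]},
        {"name": "mn1", "type": "nmos", "nets": ["a", "z", "gnd"]},
    ],
}
print(check_circuit(inverter))  # → []
```

An empty issue list means the circuit is ready to proceed to layout generation.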

Layout Design
For both digital and analog circuits, the next step after a circuit has been
simulated to correctness is to generate a correct layout of the circuit. Figure 5–12
depicts the process of generating a correct layout based on a correct circuit.

Fig. 5–12. Simplified layout design process for a circuit

How is a layout really generated? For a given digital or analog cell, the answer
is mostly by hand. As figure 5–12 shows, the key starting points are the schematic
and/or netlist for the circuit that we are trying to implement and the targets and

constraints that need to be met by the implemented layout (performance, power,
area, bandwidth, maximum operational frequency, temperature stability, etc.).
Question. Why are targets and constraints important in layout design, if they
were already addressed during circuit design?
Some targets (for example, the area occupied) can be directly addressed
only at layout time. In general, however, every target or constraint can only be
estimated (such as area) or simulated with an incomplete model (e.g., performance
or power consumption) until the layout is complete.
On the basis of these inputs, the layout entry process begins. In
this process, a designer starts by drawing polygons of various colors on a screen,
faithfully representing the devices and wires in the circuit. The colorful result
(i.e., the layout) is an extremely useful blueprint of the circuit. This blueprint has
enough information to generate all the masks necessary to build this particular
part of the chip (i.e., this circuit).
When entering the layout, the layout designer usually has lots of help.
The designer does not have to draw each line of each polygon, nor does he or
she then have to fill the polygon with a given color. Today's layout tools are
quite sophisticated. Using the mouse and keyboard on any standard computer,
the designer can indicate what layer a certain polygon belongs to. For example,
if we were drawing a polysilicon rectangle, to represent the main portion of a
transistors gate, this would usually be a red rectangle; or, if we were describing the
source and drain, these would most frequently be green rectangles for n devices
and orange rectangles for p devices. Advanced layout systems, especially analog
ones, may contain a library of entire devices or portions thereof, sometimes
called pcells, that can be selected and dropped directly onto the layout, thereby
saving the designer the considerable time that would be spent to draw them from
scratch. The following are the most common components used to build devices
and wires in a layout design:

Polysilicon. Polysilicon is polycrystalline silicon, a material made of many
small silicon crystals. In chips, it is used as input material to generate
single-crystal silicon wafers and as the conducting gate in a CMOS
transistor. It is usually heavily n- or p-doped to enhance conductivity. In
a typical layout, polysilicon is used only for this purpose and often only
for local interconnect (i.e., very short wires inside a logic or analog cell).

Metal. Metal is used in circuits to form wires. The material used for this
purpose was aluminum until recently. Aluminum has been replaced by
copper because copper has much higher electrical conductivity, which
means less resistance, thereby making the circuit run faster with lower
power consumption.

Diffusion. Diffusion refers to the region that forms the drain or the source
of a transistor. It is made of doped silicon, either n-doped (for an nMOS
transistor; by convention, shown in green) or p-doped (for a pMOS
transistor; by convention, shown in orange).

Wells. nMOS transistors need to reside in a lightly doped p silicon layer.
Conversely, pMOS transistors need to reside in a lightly doped n silicon
layer. Depending on the specific manufacturing technology, we may need
to draw rectangles in this layer that are called wells: that is, swaths of
doped silicon that sit on the actual wafer but have the correct doping sign
for the transistors that we are drawing. For example, if our technology is
based on wafers that are n-doped, nMOS transistors may need to sit in a
p-well; such technologies are called p-well processes. If our technology is
based on wafers that are p-doped, pMOS transistors may need to sit in an
n-well; such technologies are called n-well processes. Finally, if we have
nMOS transistors in a p-well and pMOS transistors in an n-well, then we
have a twin-well process.

There are other components in layout, but we won't cover them here for the sake
of simplicity.
What is missing in this picture? The design rules have yet to be incorporated.
The final layout needs to meet the layout design rules for the manufacturing
technology that is being used. Examples of rules include "the distance between
two parallel polysilicon lines must be at least 0.13 µm" and "when inserting a
contact into a diffusion (drain or source) for a transistor, the diffusion region
needs to be at least 0.07 µm larger on every dimension than the contact, and the
contact needs to be entirely inside the diffusion region."
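The two example rules above reduce to simple geometry on rectangles, which the following sketch checks directly. Coordinates are in micrometers, and this is a toy illustration of the idea, not how production rule decks are actually written:

```python
def spacing(a, b):
    """Edge-to-edge distance between two axis-aligned rectangles
    (x1, y1, x2, y2); 0.0 if they touch or overlap."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

def encloses(outer, inner, margin):
    """True if `outer` surrounds `inner` by at least `margin` on every side,
    as the contact-in-diffusion rule in the text requires."""
    return (inner[0] - outer[0] >= margin and inner[1] - outer[1] >= margin and
            outer[2] - inner[2] >= margin and outer[3] - inner[3] >= margin)

# Two parallel polysilicon lines with a 0.13 um gap: meets the spacing rule.
poly_a = (0.0, 0.0, 0.13, 2.0)
poly_b = (0.26, 0.0, 0.39, 2.0)
assert spacing(poly_a, poly_b) >= 0.13 - 1e-12

# A contact fully inside diffusion with a 0.10 um margin: meets the 0.07 um rule.
contact = (0.10, 0.10, 0.20, 0.20)
diffusion = (0.0, 0.0, 0.30, 0.30)
assert encloses(diffusion, contact, 0.07)
```

A DRC tool applies thousands of checks of this flavor, over every pair of shapes on every layer.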
In recent layout design software packages, it is possible to access the design rules
directly via the layout editor, so that the software can aid the designer in drawing
rule-compatible shapes. In any case, layout design rule checking (DRC) is run after a
design has been edited and checked, because a number of rules still may need to be
checked, especially when assembling several blocks together into a single layout.
Once rules have been checked, the design is not done yet! Despite all
the automation in current design tools, the layout has been mostly manually
generated, so we now need to make sure that this layout faithfully represents the
circuit schematic on which it is based. This process is known as LVS (layout versus schematic). In LVS, the design
tool tries to match every device and wire to the set of polygons that represent
them in the layout. When it finds discrepancies, it points to those discrepancies
in such a way that the designer can solve them quickly. For example, it may be
common to highlight simultaneously both in the schematic screen/window and
in the layout screen/window the device and/or wire that is not matching exactly.
The tool may also point to a device or wire in the schematic that cannot be found
at all in the layout.

Once the layout is DRCed and LVSed, the layout is considered to be legal.
A cell extraction and characterization process is then run to extract the actual
devices that exist in the layout but are not shown explicitly in the schematic;
these are referred to as parasitics.
Question. Why are parasitic devices not explicitly drawn initially?
Wires are drawn in a schematic simply as wires. However, in a physical layout,
they can take many forms. For example, a wire can be implemented in a layout as a
fairly long and thin metal rectangle, say, 10 µm long. As a result, what we
really have is a resistor (metal is a good conductor, but not perfect!) and, potentially,
a capacitor. Transistor devices also have parasitic devices, such as resistors and
capacitors, derived from items such as their input and output wires, which may not
be perfect either. Parasitic devices will be discussed in more detail later.
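The wire example above can be quantified with a back-of-the-envelope calculation. The sheet resistance and area capacitance below are assumed illustrative values, not taken from any process's design manual:

```python
def wire_parasitics(length_um, width_um, sheet_ohms_per_sq=0.08,
                    cap_af_per_um2=40.0):
    """Lumped parasitic R and C for a straight metal wire."""
    squares = length_um / width_um          # resistance scales with length/width
    r_ohms = squares * sheet_ohms_per_sq
    c_ff = length_um * width_um * cap_af_per_um2 / 1000.0   # aF -> fF
    return r_ohms, c_ff

# The 10-um-long wire from the text, drawn 0.26 um wide:
r, c = wire_parasitics(10.0, 0.26)
print(f"R ≈ {r:.2f} ohm, C ≈ {c:.3f} fF")
```

Even these small values matter once thousands of wires are combined, which is why extraction annotates every one of them back into the netlist.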
Extraction results in an annotated circuit, with the parasitic devices now
drawn explicitly by the extraction tool. Next comes characterization, whereby
a set of mathematical functions are generated that describe the behavior and
fundamental quantitative characteristics of the circuit implemented by our layout.
As discussed earlier, this is extremely important in digital design, since these
functions, or curves, for small logic cells (e.g., a NAND gate) are used by the
automated digital synthesis and verification tools on the larger blocks that use
those small logic cells. In analog design, characterization is becoming increasingly
important, since larger blocks are becoming popular and can increasingly be
designed using behavioral models of the smaller blocks.
To pursue characterization, we need the device models, so that we can do
the simulations needed to create the functional and performance models. Criteria
represent an interesting input to the extraction and characterization process. Since
these processes, especially characterization, can be fairly complex and long, it is
important to be able to control how the process is executed and what parameters
or parasitic devices are most important for the design at hand. For example, we
may want to characterize for resistance and capacitance parasitic devices, but not
inductance, for a relatively simple, older manufacturing technology; or we may
want to characterize for power and speed but run a smaller number of simulations
for a very new technology whose device models are still rather inaccurate.
Figure 5–13 again depicts the circuit schematic for a NAND gate and a simple
layout corresponding to this circuit (at right, in fig. 5–13). Figure 5–13 clearly
shows how transistors are formed as intersections of polysilicon and diffusion
shapes, to describe the transistor gate at the center (polysilicon on top of oxide
on top of diffusion) and the drain and source terminals around that center. Metal
is used to connect into and out of these three terminals. Furthermore, contacts
that connect one layer to another are used to connect this metal with either
polysilicon or diffusion. Finally, metal is also used for any other interconnections
and, most importantly, to implement power supply (Vdd) and ground (Gnd).

Fig. 5–13. A NAND gate circuit schematic (left) and its layout (right)

As can be seen in figure 5–13, there are a number of interesting guidelines
followed by layout designers, beyond meeting the basic layout design rules. A few
specific items are notable:

Device location. Transistor devices are clustered together, primarily for
insertion into their respective wells. As is evident in figure 5–13, pMOS
devices are clustered on the upper part of the layout, while nMOS
devices are clustered on the lower part. It is also evident that this is a
twin-well manufacturing process, as there are wells for both the top
(pMOS) and the bottom (nMOS) devices.

Layer and device orientation. For various reasons, certain devices and
layers tend to be laid out in one specific direction. For example, in the
layout depicted in figure 5–13, transistor diffusion shapes are laid out
vertically, while power supply and ground lines, as well as inputs and
outputs to the circuit, are laid out horizontally. For interconnections,
metal and polysilicon lines tend to move vertically. This level of
organization and regularity helps in assembling multiple cells together
(e.g., all power supplies fit together and can easily be routed as one),
making the cell more manufacturable (regular, grid-style layouts are
easier on the photolithography tools that need to push the patterns onto
the wafer), and making it easier to pack the devices tightly in a small
space, saving area and power while improving timing/speed.

Tracks and size of a cell. Note also that a maximum, fixed height exists
for a cell, typically measured in the form of tracks, to keep height
constant and facilitate horizontal assembly of cells. Each track is given
by a certain pitch: that is, a fixed distance proportional to the minimum
printable size in the manufacturing technology. In layout design, pitches
are usually utilized to describe the minimum distance between two
identical features, for example, the distance between the centers of two
polysilicon lines. For a technology whose minimum printable feature is
0.13 µm, the pitch of each track could be 0.26 µm, and the maximum
number of tracks per cell (i.e., maximum height) could be 20, for a
maximum height per cell of 5.2 µm.
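The arithmetic in that worked example is worth making explicit; the pitch-is-twice-the-feature relationship below matches the example's numbers but is an assumption, since the multiple varies by technology:

```python
def cell_height_um(min_feature_um, tracks, pitch_multiple=2.0):
    """Maximum standard-cell height: each track's pitch is taken as a
    fixed multiple of the minimum printable feature (2x here, matching
    the text's 0.13 um feature / 0.26 um pitch example)."""
    pitch = pitch_multiple * min_feature_um
    return tracks * pitch

# The worked example from the text: 0.13 um features, 20 tracks.
print(cell_height_um(0.13, 20))  # → 5.2
```

Because every cell in the library shares this height, rows of cells can be abutted horizontally with their power rails lining up automatically.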

Layers of interconnect. As the layout in figure 5–13 shows, even though
multiple layers of metal are typical in modern semiconductor processes
(processes with eight metal layers are common today), only one type of metal
is used inside a cell; together with the polysilicon layer, this provides the
only high-conductance means to interconnect devices in this layout. This
usage is referred to as local interconnect. Within cells, distances
are quite short; thus, few layers of interconnect are needed. The upper,
thicker layers of interconnect can then be reserved to connect many cells
together or to connect blocks made of groups of cells together. We can
also use them for special signals, such as the power supply, ground, or
clock signals across long distances in the chip. Contacts are needed to
interconnect one layer with another, for example, metal layer 1 (M1,
the one used in this simple layout) with metal layer 2 (M2). This layout is
simplified, since it would be common to see a different metal layer used
for power and ground, as opposed to internal interconnects.

Numerous and complex rules (restrictions). In relation to the guidelines
described previously, there may be numerous additional restrictions
with various motivations, generally aimed at making the chip more
manufacturable. For example, modern chips may require that every
shape fall on a grid in the layout. By falling on a grid, the manufacturing
lithography tools will have an easier time replicating the pattern given
by these shapes onto the wafer. A typical complex logic gate will have
numerous such restrictions, most of which will be embedded in the
set of layout design rules included with the layout editing tool. Note
that this example is a very simple circuit and uses very simple device
combinations; for example, no low-threshold devices are combined with
the standard-threshold transistors in the layout. Such combinations would
surely add new shapes and would encounter layout restrictions as well.

Layout verification: DRC
A key aspect of layout design verification is to check against the rules given
by the manufacturing process. This DRC subflow, as previously discussed, is
part of the overall layout design flow and is graphically described in figure 5–14.
(Other rules, such as antenna rules, electrical rules, XOR checks, etc., need to be
verified but either belong elsewhere in the described design flow or will not
be covered in this book, as they are not as critical to chip design.)

Fig. 5–14. DRC subflow as part of layout design

Definition. Design rules consist of a group of rules provided by each
semiconductor manufacturer to allow designers to verify the correctness of layout
data as the key input needed to create a set of masks for chip production. Design
rules ensure that most of the manufactured chips will work correctly by expressing
geometric and connectivity constraints or restrictions that provide the margin
needed to compensate for the variability inherent in the manufacturing process.
The most common design rules are based on three main parameters:

Size rules. The minimum width of a certain layout shape is a common
design rule. For example, metal wires of layer M1 need to be at least
0.26 µm wide. The key goal of this type of rule is to avoid opens: that is,
breaks in the metal that thin it so much that it loses its conductivity,
either partially or completely (imagine a wire that is partially or
completely cut in the middle and thus becomes unusable).

Distance rules. The separation between multiple objects needs to be
at least a certain distance for the chip to be reasonably manufactured.
These rules are also key limiters of the size of a chip (setting a minimum
size). An example of such a rule follows: the distance between two
parallel polysilicon lines must be at least 0.26 µm.

Surrounding rules. Some shapes in a layout lie on top of one another.
These layers often need to have sufficient overlap, for various reasons.
A common reason is that because of the way layers are manufactured
in the chip fabrication process, like layers on a cake, there needs to
be enough overlap for them to interface sufficiently. As is evident in
the example layout of figure 5–14, a contact needs to be surrounded
sufficiently by both the diffusion and the metal layers that are being
connected to each other by that contact.
The lowest layers in a process (those that are laid down first), such as
polysilicon, tend to have the tightest distance and size rules. Higher layers,
especially metal, tend to be thicker and have larger distance rules.
There are numerous subtypes of rules within these three main classes, and
there are increasing numbers of new rules that cannot be easily expressed as one
of these three types. Since rules are essentially Boolean statements (questions
with digital answers, i.e., yes [failed] or no [passed]), they are typically expressed
in a certain design rule language. Regardless of the programming language,
manufacturing vendors provide a design rule manual that describes every single
rule, how to understand its implications, and guidance on how to debug it.
Unfortunately, as manufacturing processes have become more complex, rules
have in effect become very complex programming statements that are difficult to
express verballyperhaps even more so with a design rule language. Not only has
the complexity of each rule grown exponentially, but the number of rules has also
exploded for a given manufacturing process. As such, the DRC process involved
in verifying (which is done by the tool) and, more important, debugging (which
needs to be done by the designer) has grown in complexity as well. Current design
rule manuals from most manufacturing vendors include hundreds to thousands of
increasingly complex rules, making the job of DRC ever more difficult. Fortunately,
layout design tools and, perhaps more important, the computational power of
today's computers and server farms have also improved substantially.

Layout verification: LVS
Once the layout is design rule correct, a designer still needs to make sure it matches
the schematic from which it originated. As in many other design tasks, even
though we generate more lower-level details based on higher-level descriptions,
we still need to check that the details faithfully represent and match the higher-
level description and thus meet the requirements/specifications.
LVS verification attempts to find the electrical devices and wires in a layout
and then compares them with the schematic for the implemented circuit. Figure
5–15 depicts the LVS process as part of layout design, expanded to provide a
simplified example.

Fig. 5–15. LVS process and simplified example

To do their job correctly, LVS tools have to take a data file, including all the
shapes drawn for each layer in the layout, and perform a number of computations
to determine what devices and wires are represented in the layout (e.g., a
polysilicon shape on top of a diffusion shape means a transistor). Importantly,
they need to ascertain how each device is connected to others through wires. This
includes all signals, including the power supply and ground.
Once a database listing wires and devices (including parasitic devices) has
been obtained, the LVS tool generates a netlist data structure based on that list.
Why? Because we are attempting to compare a layout with a schematic, and we
cannot compare apples (layout components) and oranges (schematic symbols).
The common language is the netlist.

As figure 5–15 shows, from the schematic (extracted and annotated),
another netlist is generated in parallel. Then, the layout netlist is compared to the
schematic netlist. This matching problem is a well-known EDA problem whose
solution is based on solid computer science techniques (basically comparing two
different graphs and noting any differences). If the two netlists match exactly, we
are done (passing or clean); otherwise, we need to debug the LVS result, which
typically comes in the form of a report, and make the appropriate changes on
the layout.
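A drastically simplified version of this netlist comparison can be written in a few lines. Real LVS tools perform graph matching on millions of devices; the sketch below merely compares canonicalized device lists (type, sizes, and connected nets, with invented field names), which is enough to show how a clean run and a mismatch report arise:

```python
def netlist_signature(netlist):
    """Canonical form of a netlist: each device reduced to its type,
    parameters, and the sorted nets it touches."""
    return sorted((d["type"], d["w"], d["l"], tuple(sorted(d["nets"])))
                  for d in netlist)

def lvs_compare(schematic, layout):
    """Report devices present on one side but not the other."""
    sch, lay = netlist_signature(schematic), netlist_signature(layout)
    report = []
    for d in sch:
        if d not in lay:
            report.append(f"missing in layout: {d}")
    for d in lay:
        if d not in sch:
            report.append(f"extra in layout: {d}")
    return report

schematic  = [{"type": "pmos", "w": 0.52, "l": 0.13, "nets": ["a", "z", "vdd"]}]
layout_ok  = [{"type": "pmos", "w": 0.52, "l": 0.13, "nets": ["vdd", "a", "z"]}]
layout_bad = [{"type": "pmos", "w": 0.26, "l": 0.13, "nets": ["a", "z", "vdd"]}]
print(lvs_compare(schematic, layout_ok))   # → []
print(lvs_compare(schematic, layout_bad))  # device width 0.26 instead of 0.52
```

Note that the second comparison fails on the device's width parameter, one of the common mismatch types discussed below.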
What is an LVS violation like? The most common LVS violations are
mismatches like the following:

Shorts and opens. When several wires are connected in the layout but
are not connected in the schematic, something is clearly wrong! This is
commonly referred to as a short, since the schematic is assumed to be
correct (although that is not always the case). Conversely, when several
wires are not connected in the layout but are connected in the schematic,
that is referred to as an open.

Wrong device. When a device of a certain type is used in the layout and
a different one is used in the schematic, we must have the wrong device.
Examples are using pMOS when nMOS should be used and using low-
threshold-voltage nMOS when standard-threshold-voltage nMOS should be used.

Missing device. Sometimes a device or wire is simply not found in the layout.
A relatively common case is the lack of a power supply or ground wires.

Parameter mismatch. Finally, a frequent error happens when a device has
a different length or width in the layout as compared with the schematic.
For example, parameter mismatch occurs when a pMOS device should be
0.52 µm in width and 0.13 µm in length (i.e., 0.52 × 0.13) but is only 0.26
µm in width and 0.13 µm in length (i.e., 0.26 × 0.13).

Although the example in figure 5–15 does not show all the details in the
report, it is clear that the LVS process failed, and debugging needs to happen. It
appears that the power supply has not been found in the layout.
Other potential problems found during LVS include

Some but not all of the power or ground pins in the layout are found.

Not all input and/or output pins are in the layout.

The names of certain pins differ between the layout and the schematic.

There are too many nets (wires) in the layout. Some connections are
not made.

There are too many nets in the schematic. There is probably a short in
the layout.

The following are typical approaches to debugging LVS issues:

The designer reads a report in the form of an output computer file and
finds a number of violations.

The designer tries to locate the issue or offending device in the
schematic by using the schematic editor.

The designer cross-checks in the schematic while visualizing the layout
and/or the error.

The designer uses the layout or schematic editor to fix the error.

Most modern tools will be able to clearly highlight errors in both layout and
schematic directly on the screen, thereby facilitating the designers job.

Layout verification: layout extraction

One design task that we have glossed over several times is layout extraction.
This task is necessary for several parts of the layout and circuit design process,
including LVS, characterization (which will be addressed next and is key for a lot
of other higher-level tasks), and the all-important final circuit simulation, including
all layout effects. Although different extractions would in theory be needed for
different purposes, there is a single most common way to do extraction that
provides the vast majority of the information needed.
Layout extraction (or, perhaps more accurately, circuit extraction) aims at
transforming a given layout into its corresponding circuit (an electrical netlist),
which it is attempting to represent faithfully. The layout extraction process, as
well as its role within the overall layout design process, is depicted in a simplified
manner in figure 5–16.

Fig. 5–16. Layout extraction process as part of overall layout design

The goal of layout extraction is to obtain a detailed netlist, including all
parasitic devices in the circuit. This netlist is extremely useful for two main reasons.
First, by use of this netlist, the schematic with which we want to compare is also
annotated with those parasitic devices found. The tool can then link the extracted
circuit (and even the logic function) with the layout. The designer can then cross-
reference any device or wire in the circuit to the layout.
Second, the netlist can be directly used to simulate the circuit with all detailed
parasitic devices and/or to characterize it for the use of higher-level tools. For different
simulator tools, we can also translate the netlist into different formats as required.

The annotated netlist and/or schematic will include the following:

The actual devices and wires that the designer intended in the first place.
We use the terms interconnect and interconnect extraction to refer to
these wires and their extraction, respectively.

The parasitic devices that come from the electrical characteristics of the
shapes in the layout but were not explicitly entered in the schematic, as they
serve no useful function.

There can be several levels of precision when extracting devices, wires,
and parasitics. More accurate extractions (potentially including all resistances,
capacitances, and inductances) may be necessary for characterization for use
by timing analysis and power analysis tools. Less accurate estimations may be
acceptable to simulate speed and power for the cell, to check that the circuit and
the layout were reasonably well designed.

Characterization
Once a layout and a schematic are correctly designed and matched, we need to
characterize them for the use of higher-level tools. In this way, the overall circuit
or block can be evaluated in all its glory. This is a very structured process in digital
circuits, but it is becoming more common in analog circuits as well. The main
goal of circuit characterization is to generate a simplified yet detailed model of a
cell layout for use by other tools, including timing and power analysis tools.
Cell characterization, as well as its role within overall cell design, is depicted in
figure 5–17. As figure 5–17 shows, the results of characterization are a set of models,
or curves, that describe key characteristics (hence, the term characterization) of
the circuit, in a parametrized way. That is, using these curves, we can put this
circuit under various different conditions (e.g., change the value of the power
supply Vdd) and obtain its key performance or power characteristics without
having to create the circuit and its layout from scratch.

Fig. 5–17. Depiction of characterization process and its role within overall design

Examples of characterization results for a logic cell include

How delay varies with the slope of input signals

How delay varies with the slope of the input clock (latches)

How delay varies with the voltage supply

How setup/hold time for a latch varies with signal slope or supply voltage

There are a number of possible output characterization forms, the most
common of which are as follows:

An analytical family of curves, which are basically fit to the behavior of
the circuit over a set of parameter intervals. Curve-fitting methods are
often used, in which an equation is created for each curve by using the
points that have been generated through simulation.

A look-up table (LUT), similar to the preceding form but with values
being looked up in a table instead of computed using a curve-fitting
equation. Although LUTs can be huge, this is a very effective way to use
the computed fit function.

The typical procedure for circuit characterization consists of running many
circuit simulations on the extracted netlist, potentially using statistical methods
(e.g., Monte Carlo sampling, as described earlier) to save time. The curve-fitting
or LUT model is then computed from the results of these simulations.
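Both output forms can be sketched in a few lines. The slew/delay points below are made-up stand-ins for values that a real flow would obtain from circuit simulation on the extracted netlist, and the least-squares straight line stands in for the analytical family of curves:

```python
# Hypothetical characterization data: cell delay (ps) simulated at a few
# input-slew values (ps); a real flow would obtain these points from
# SPICE runs on the extracted netlist.
points = [(10.0, 52.0), (20.0, 55.0), (40.0, 61.0), (80.0, 74.0), (160.0, 101.0)]

def lut_delay(s, table):
    """Form 2: look-up table, linearly interpolating between stored points."""
    for (s0, d0), (s1, d1) in zip(table, table[1:]):
        if s0 <= s <= s1:
            return d0 + (d1 - d0) * (s - s0) / (s1 - s0)
    raise ValueError("slew outside characterized range")

# Form 1: analytical model -- a least-squares line, delay = a*slew + b.
n = len(points)
sx = sum(s for s, _ in points); sy = sum(d for _, d in points)
sxx = sum(s * s for s, _ in points); sxy = sum(s * d for s, d in points)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Either form now predicts delay at a slew that was never simulated.
print(lut_delay(60.0, points))   # 67.5 ps (table form)
print(round(a * 60.0 + b, 1))    # fitted-equation form, close to the table value
```

Either model answers timing queries far faster than rerunning circuit simulation, which is the whole point of characterization.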

Analog Layout Design

Although this book is not about analog design, its key characteristics should
be understood, given its increasing importance and its integration with digital
design. From the layout perspective, two key differences of analog design with
respect to digital design are the need for symmetry and a much higher sensitivity
to parasitic devices (resistors, capacitors, inductors, etc.).
Example. Consider a common type of analog circuit these days: a voltage
regulator. Its role is to produce a very stable voltage supply, taking as an input a
higher, potentially less stable supply. Common characteristics of a regulator in
CMOS technology include

• Size. It is a large cell (possibly over 150 μm on each side) as compared
with digital cells. Also, each device (transistor) and wire can be quite
large (more than 100 μm in some cases).

• Regularity. It does not have much regularity (several blocks inside of
various sizes, structures, and orientations, such as a bandgap, an amplifier,
and the output transistor, which is frequently very large).

• No fixed form factor. There is no specific limit on the number of
vertical tracks, as this circuit will not be lined up horizontally next to
thousands like it.

• Careful routing. The voltage supply is extremely carefully laid out and
very thick.

For analog circuits, we may need to verify that the circuit behaves similarly
after layout extraction many more times than for digital circuits, which even
today are much less sensitive to parasitic devices. An analog layout designer
must typically modify the layout dozens to hundreds of times for each circuit.
DRC is also similar to digital design, except that it may be more complex
owing to special layout design rules and to special devices, such as diodes,
resistors, and capacitors. Similarly, LVS verification for analog circuits may be
more complex than for digital circuits, especially given the addition of various
devices and layout design rules (e.g., symmetry restrictions).

Chip Physical (Layout) Design

As with any other engineering and/or construction undertaking, the fun starts
when you assemble all the blocks to finalize the chip! This process is called chip
integration and builds on another process that takes place much earlier in the
design, called chip planning. Both are depicted symbolically in figure 5–18.

Fig. 5–18. Simplified inputs and outputs of chip integration


Early in the design process, chip planning, or floorplanning, is pursued, with
the goals of producing a rough placement of the main blocks and a rough
estimate of area (perhaps also of power and performance). While this topic has
already been covered in some detail, what is critical later on in the design is the
placement of those blocks. Early in the design, several blocks may not have been
designed yet; only those that are reused from other teams or companies will have
been designed at this point. Thus, initial floorplanning needs an estimation
process to come up with their estimated sizes, form factors, and performance
characteristics.
Floorplanning can be manual or automatic. The concept is simple: the designer
tries a virtual prototype of the block, then estimates whether timing, power, and
area will be feasible. Key information is captured, such as the approximate wire
lengths, especially for long chip wires, and the resulting placement can help with
design later on (e.g., with block-level timing constraints and, at the end, with
assembly). As figure 5–18 shows, the assembly side of chip integration consists of
taking that floorplan and implementing it once the blocks in the chip are complete.
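As a toy illustration of that estimation step, the block areas and utilization figure below are invented placeholders of the kind a floorplanner would refine as real blocks appear:

```python
# Early floorplan estimate: block areas (mm^2) are placeholder guesses,
# since most blocks have not been designed yet at this stage.
blocks = {"cpu": 6.0, "dsp": 4.5, "sram": 8.0, "analog": 2.5, "io_ring": 3.0}
utilization = 0.70   # assume 70% of the die is usable for blocks and routing

die_area = sum(blocks.values()) / utilization
die_edge = die_area ** 0.5       # assuming a square die

print(round(die_area, 2))   # 34.29 mm^2 estimated die area
print(round(die_edge, 2))   # 5.86 mm per side
```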

Clock planning
An absolutely critical net (i.e., a set of fully interconnected wires) and its
associated electrical signal in a chip is the clock signal (or signals), as discussed
previously. During the floorplanning and/or assembly of the chip, the clock signal
needs to be carefully distributed. Clock planning is the design task that defines
how the clock (or clocks) is distributed across the chip.
Two key needs are addressed in clock planning:

• Low-skew clocks
• Clean clocks
Skew is defined as the timing difference between two signals. The whole
point of having a clock is to synchronize signals and circuits across the chip. If the
clocks for different circuits are not synchronized, it is not clear how much of a time
budget exists for signals to get from latch to latch; thus, time differences (skew)
need to be deducted from the time budget. Consequently, circuits are designed more
conservatively than they need to be and thus run slower and/or consume more
power. Therefore, we need a clock that has low skew where appropriate.
In other words, for two circuits that in theory take the same clock as an input
but in reality pull the signal from different points in the clock wire network, we
need to pay attention to skew. We need to make sure that the clock arrives at all
of these locations at the same time. Any difference in arrival time amounts to skew
and should be deducted from the time budget.
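A tiny sketch with made-up arrival times shows the accounting involved:

```python
# Skew accounting sketch; all numbers are illustrative.
clock_period_ps = 1000.0   # target cycle time (a 1 GHz clock)
# Clock arrival times (ps) at four latches that nominally share one clock
arrivals = {"latch_a": 12.0, "latch_b": 15.0, "latch_c": 9.0, "latch_d": 21.0}

skew = max(arrivals.values()) - min(arrivals.values())
logic_budget = clock_period_ps - skew   # time left for latch-to-latch logic

print(skew)          # 12.0 ps of skew
print(logic_budget)  # 988.0 ps remain for logic delay
```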


Clocks should ideally be nicely square signals that move periodically between
0 (ground) and 1 (Vdd). Unfortunately, clock signals are difficult to carry as square
periodic signals. First, it is extremely difficult to create a perfectly square signal with
clock generator circuits. Second, even a perfectly square signal can get smoothed
and generally distorted by the numerous parasitic devices through which it has to
run or that reside beside it. Long wires are resistors and, possibly, inductors, and
the various devices that receive the clock signal act as capacitors loading it.

Unclean clocks have two problems. First, any deviation from a square
signal adds to skew. Second, slow-rising or slow-falling signals (as happens
when they are not square) lead to more power consumption. (Remember that
any voltage between 0 and Vdd may cause a chain of connected transistors to
conduct current from Vdd to ground.)
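A first-order estimate of this smoothing treats the long wire as a resistance R driving the receivers' total capacitance C; the 10%–90% rise time of such an RC stage is about 2.2RC. The values below are illustrative, not from any particular technology:

```python
# First-order RC model of clock-edge degradation; numbers are illustrative.
R = 200.0      # long-wire resistance, ohms
C = 500e-15    # combined wire + receiver capacitance, farads

rise_time_s = 2.2 * R * C   # 10%-90% rise time of a single RC stage
print(round(rise_time_s * 1e12, 1))  # 220.0 ps -- a big slice of a 1 ns period
```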
The practical goal of clock planning is to come up with a clock plan, more
commonly referred to as a clock distribution network: the wire tree that
transports the clock signal across the chip (or the block, in the case of more
local clock planning). As shown in figure 5–19, which depicts the clock-planning
process, the most common approach to minimizing skew is to develop a fork- or
H-shaped network. The theory underlying this approach is that most end points
then lie at the same distance from the source of the clock.

Fig. 5–19. Simplified clock-planning design process
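The equal-distance idea can be checked with a toy recursive model of an H-shaped tree, in which every split sends two equal-length branches onward and each level's branch is half as long (lengths are arbitrary units):

```python
def leaf_path_lengths(levels, segment=8.0):
    """Root-to-leaf wire length of every leaf in a symmetric clock tree.

    At each level the net splits in two, and each branch at the next
    level is half as long -- the recursive pattern of an H-tree.
    """
    if levels == 0:
        return [0.0]
    deeper = leaf_path_lengths(levels - 1, segment / 2)
    return [segment + p for p in deeper for _ in (0, 1)]

lengths = leaf_path_lengths(4)
print(len(lengths))        # 16 leaves
print(len(set(lengths)))   # 1 distinct length: zero nominal skew
print(lengths[0])          # 15.0 = 8 + 4 + 2 + 1
```

Every leaf sees the same nominal wire length, so skew comes only from mismatches in loading and manufacturing, not from the topology itself.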

Power planning
The other key distribution structure in a chip is the power supply
distribution network. An increasingly important design task, power planning
has as its goal the exploration of various supply distribution options early in
the design cycle, to ensure that the best ones are chosen and implemented as the
design proceeds.


To be most effective, power-planning tools need to be very fast and interactive.
Questions that a power-planning tool should be able to answer include

• Can a power distribution network with 80% density handle 0.2 watts per
square millimeter without losing voltage strength?
• How much decoupling capacitance does an input/output buffer need on
its power supply to work properly?
As figure 5–20 shows, power planning can provide valuable insights about
how the supply is being distributed and the consequences thereof. In figure
5–20, note how the voltage supply, as it comes in from the edges of the chip, loses
strength, and thus magnitude, as it approaches the center of the chip. Even
though it loses strength, however, it decreases more or less uniformly, which can
help in controlling clock skew and other key parameters between circuits that are
a similar distance from the center. Modern power-planning tools can provide this
information and much more, including spreadsheet-like interfaces to define the
overall power grid and advanced visualization and analysis capabilities.

Fig. 5–20. Voltage-level distribution across a chip surface
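Questions of the kind listed above can be answered to first order with Ohm's law; the grid resistance, block area, and drop budget below are illustrative assumptions:

```python
# Back-of-the-envelope IR-drop check; all numbers are illustrative.
vdd = 1.0                # supply voltage, volts
power_density = 0.2      # watts per square millimeter
block_area = 4.0         # mm^2 of circuitry fed by this branch of the grid
grid_resistance = 0.05   # effective ohms from the chip edge to the block

current = power_density * block_area / vdd   # I = P / V
ir_drop = current * grid_resistance          # V = I * R

print(round(ir_drop, 3))      # about 0.04 V lost in the grid
print(ir_drop / vdd <= 0.05)  # True: within a 5% drop budget
```

A real power-planning tool solves this across the whole resistive grid rather than one branch, but the underlying check is the same.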

Impact of Manufacturability
on Design
As a final topic, the most important design issue of all awaits: manufacturability,
the ability to manufacture a designed chip effectively. This entails consideration
of the overall yield of the chip, that is, the percentage of correct chips out of all
those coming off the manufacturing line.


Manufacturability is slowly but surely being inserted into all mainstream
design flows, on both the digital and the analog design sides. One example
of such penetration in the design flow is presented by postlayout yield-analysis
tools. These tools provide a score based on the manufacturability of the circuit or
chip. Metrics used to compute manufacturability include

• The number of redundant contacts, or vias, that the designer opted to
include to improve yield
• How tightly a number of distance design rules are met

Another example is presented by layout yield-enhancement tools. These
tools act on the layout itself and fix items that could reduce manufacturability,
for example, by

• Automatically increasing the spacing between wires in the layout
• Automatically reducing general layout congestion by spacing everything out more
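A yield score of the kind such tools compute can be sketched with the classic Poisson yield model, Y = exp(−A·D); the defect densities and the credit assigned to redundant vias below are invented for illustration:

```python
import math

# Poisson yield model: Y = exp(-A * D), with A the chip area and D the
# density of yield-killing defects. Numbers are illustrative.
area_mm2 = 100.0
defect_density = 0.002            # killer defects per mm^2

yield_before = math.exp(-area_mm2 * defect_density)

# Suppose half the defects are via-related and redundant vias remove 30%
# of those -- the kind of credit a yield-analysis tool might assign.
defect_density_after = defect_density * (1 - 0.5 * 0.3)
yield_after = math.exp(-area_mm2 * defect_density_after)

print(round(yield_before, 3))   # 0.819
print(round(yield_after, 3))    # 0.844
```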

As figure 5–21 shows, manufacturability has penetrated the design flow
deeply, in several aspects beyond the yield-checking and yield-enhancement tools
already mentioned. First, basic device models and design rules are expanding
to include rules with more restrictions, or radically restricted rules, that force
layouts to resemble grids.

Fig. 5–21. Manufacturability issues influencing design flows


Second, resolution enhancement technologies (RET) have penetrated the
design flow. These techniques account for the optical effects and resolution
limitations of lithography equipment in modern manufacturing processes. They
may, for example, force layouts to have alternating types of layers only at points
of a grid, so that contrast and optical proximity are optimal for the lithography
tool to faithfully print the pattern on the wafer.
Third, new circuits devoted to measuring, and possibly fixing, manufacturability
have appeared recently. For example, an oscillator circuit that measures its own
oscillation frequency can quantify how transistor speed varies across a wafer,
chip by chip.
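The arithmetic behind such a monitor circuit is simple: a ring of N inverters oscillates at f = 1/(2·N·t_d), so each frequency reading yields the average gate delay t_d on that chip. The stage count and readings below are invented:

```python
# Ring-oscillator speed monitor sketch; frequencies are invented readings.
N = 101                                     # inverter stages in the ring
measured_freq_hz = [495e6, 510e6, 482e6]    # one reading per sampled chip

# f = 1 / (2 * N * t_d)  =>  t_d = 1 / (2 * N * f)
gate_delay_ps = [1e12 / (2 * N * f) for f in measured_freq_hz]
spread_ps = max(gate_delay_ps) - min(gate_delay_ps)

print([round(d, 2) for d in gate_delay_ps])  # per-chip gate delay, ps
print(round(spread_ps, 2))                    # ps of across-wafer variation
```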


As demonstrated throughout this book, modern methodologies to design
chips exploit a combination of top-down and bottom-up approaches. That is,
they start at the top, but as they move down, they are based on the development
of smaller components (the circuits and layouts of logic cells and analog
circuits). These are then assembled in various manners, by use of logic synthesis
and verification, chip assembly/integration, and so forth.

The top and bottom edges of the design stack hold a lot of action. At the top
is the system level of design, an increasingly important discipline. At the bottom
is the layout layer, where manufacturability issues have returned to the forefront
and are slowly being inserted into the design flow. In other words, in modern
design flows, information flows both upward and downward, which makes the
process most effective.

Exercises


Digital design
Starting with an existing design (e.g., a CLA adder, which can be found on
the Internet and in various books), complete the following steps:

• Design flow
  – Draw the design flow to be followed when designing this adder
  – Identify a computer-aided design (CAD) tool for each step of the design flow
  – Make sure the tools are installed in your computer environment

• Write a functional/behavioral description of the adder
  – Use VHDL, Verilog, Matlab, or C

• Logic design
  – Generate a gate netlist
  – Run timing and power analyses

• Layout and postlayout
  – Generate a layout, verify, and extract
  – Rerun timing and power verifications
  – Route to a pad frame

Analog design
Starting with an existing design (e.g., a two-stage operational amplifier,
which can be found on the Internet and in various books), complete the following
steps:

• Design flow
  – Draw the design flow to be followed when designing this amplifier
  – Identify a CAD tool for each step of the design flow
  – Make sure the tools are installed in your computer environment

• Write a functional/behavioral description of the amplifier
  – Use Matlab, C, Verilog-A, or VHDL-AMS (please note that the latter
    two are languages specifically intended for analog and mixed-signal
    design, with the advantage that their grammars resemble Verilog and
    VHDL, respectively, and thus are friendly to digital designers)

• Circuit design
  – Generate a schematic
  – Run power, bandwidth, and gain simulations

• Layout and postlayout
  – Generate a layout, verify, and extract
  – Rerun simulations until successful
  – Route to a pad frame

Exercises 149

Bibliography


Overall chip design
ASICs. IBM Semiconductor Solutions.

Keating, Michael, Russell John Rickford, and Pierre Bricaud. 2006. Reuse Methodology
Manual for System-on-a-Chip Designs. New York: Springer.

Martin, Kenneth W. 1999. Digital Integrated Circuit Design. New York: Oxford University Press.

Pedram, Massoud, and Jan M. Rabaey, eds. 2002. Power Aware Design Methodologies.
Boston: Kluwer Academic Publishers.

Rabaey, Jan M., and Massoud Pedram, eds. 1995. Low Power Design Methodologies.
Boston: Kluwer Academic Publishers.

Weste, Neil H. E. 1993. Principles of CMOS VLSI Design: A Systems Perspective.
Reading, Mass.: Addison-Wesley.

Zheng, Pei, and Lionel Ni. 2005. Smart Phone and Next Generation Mobile Computing.
San Francisco: Morgan Kaufmann.

System-level design
Alexander, Perry. 2007. System Level Design with Rosetta (Systems on Silicon). San
Francisco: Morgan Kaufmann.

De Micheli, Giovanni, Rolf Ernst, and Wayne Wolf. 2001. Readings in Hardware/Software
Co-design. San Francisco: Morgan Kaufmann.

Gajski, D., Nikil D. Dutt, Allen C. Wu, and Steve Y. Lin. 1992. High-Level Synthesis:
Introduction to Chip and System Design. Boston: Kluwer Academic Publishers.

Gerstlauer, Andreas, Rainer Dömer, Junyu Peng, and Daniel D. Gajski. 2001. System
Design: A Practical Guide with SpecC. Boston: Kluwer Academic Publishers.

Grötker, Thorsten, Stan Liao, Grant Martin, and Stuart Swan. 2002. System Design with
SystemC. Boston: Kluwer Academic Publishers.

Jerraya, Ahmed Amine, Sungjoo Yoo, Norbert Wehn, and Diederik Verkest. 2003.
Embedded Software for SoC. Boston: Kluwer Academic Publishers.

Lavagno, Luciano, Grant Martin, and Bran Selic. 2003. UML for Real: Design of
Embedded Real-Time Systems. Boston: Kluwer Academic Publishers.

Mermet, Jean. 2001. Electronic Chips and Systems Design Languages. Boston: Kluwer
Academic Publishers.

150 Chip Design for Non-Designers: An Introduction

_Carballo_Book.indb 150 2/20/08 4:54:39 PM

Müller, Wolfgang, Wolfgang Rosenstiel, and Jürgen Ruf. 2003. SystemC: Methodologies
and Applications. Boston: Kluwer Academic Publishers.

OMAP Platform. Texas Instruments.

Raghunathan, Anand, Niraj K. Jha, and Sujit Dey. 1997. High-Level Power Analysis and
Optimization. Boston: Kluwer Academic Publishers.

Staunstrup, Jørgen, and Wayne Wolf. 1997. Hardware/Software Co-design: Principles

and Practice. Boston: Kluwer Academic Publishers.

Walls, Colin. 2005. Embedded Software: The Works. Amsterdam: Newnes.

Logic designtiming analysis

Agarwal, A., V. Zolotov, and D. T. Blaauw. 2003. Statistical timing analysis using bounds
and selective enumeration. IEEE Transactions on Computer-Aided Design of Integrated
Circuits and Systems. 22 (9): 1243–1260.

Harris, D., M. Horowitz, and D. Liu. 1999. Timing analysis including clock skew. IEEE
Transactions on Computer-Aided Design of Integrated Circuits and Systems. 18 (11).
Logic designpower analysis and optimization

Alpert, C., C. Chu, G. Gandham, M. Hrkic, J. Hu, C. Kashyap, and S. Quay. 2004.
Simultaneous driver sizing and buffer insertion using a delay penalty estimation
technique. IEEE Transactions on Computer-Aided Design of Integrated Circuits and
Systems. 23 (1): 136–141.

Brooks, D., V. Tiwari, and M. Martonosi. 2000. Wattch: A framework for architectural-
level power analysis and optimizations. In Proceedings of the 27th International
Symposium on Computer Architecture. 83–94. New York: Association for
Computing Machinery.

Rabe, D., G. Jochens, L. Kruse, and W. Nebel. 1998. Power-simulation of cell based
ASICs: Accuracy- and performance trade-offs. In Proceedings, Design, Automation
and Test in Europe. 356–361. Los Alamitos, Calif.: Institute of Electrical and
Electronics Engineers.

Roy, Kaushik, and Sharat Prasad. 2000. Low-Power CMOS VLSI: Circuit Design. New
York: John Wiley & Sons.

Logic designtestability
Abramovici, Miron, Arthur D. Friedman, and Melvin A. Breuer. 1994. Digital Systems
Testing and Testable Design. New York: IEEE Press.

Bibliography 151


Layout design
Clein, Dan. 1999. CMOS IC Layout: Concepts, Methodologies, and Tools. Boston:
Newnes.
Analog design
Abidi, Asad A., Paul R. Gray, and Robert G. Meyer. 1998. Integrated Circuits for Wireless
Communications. New York: IEEE Press.

Gray, Paul R., Paul J. Hurst, Stephen H. Lewis, and Robert G. Meyer. 2001. Analysis and
Design of Analog Integrated Circuits. 4th ed. New York: John Wiley & Sons.

Razavi, Behzad. 2000. Design of Analog CMOS Integrated Circuits. Boston: McGraw-Hill.


Digital design
Rabaey, Jan M., Anantha Chandrakasan, and Borivoje Nikolic. 2002. Digital Integrated
Circuits. 2nd ed. Upper Saddle River, N.J.: Prentice Hall.

Director, Stephen W., Wojciech Maly, and Andrzej J. Strojwas. 1990. VLSI Design for
Manufacturing: Yield Enhancement. Boston: Kluwer Academic Publishers.



A automated test pattern generation

(ATPG), 87
abstraction, xi, 34, 6, 11 automotive electronics, xiii
AC analysis, 121
AC power consumption, 7274
active power consumption, 7274 B
aggressor signal, 8182 bandpass filters, 125
alternating current analysis, 121 bandwidth, 71, 119
alternating current (AC) power baseband processing, 2
consumption, 7274 behavioral description, 22
amplifiers, 114 BER (bit error rate), 5, 6, 36
extraction and, 116119
gain, 120 BIST (built-in self-test), 87
LNA, 116117 bit error rate (BER), 5, 6, 36
analog design, 34, 110126, 152 block description, 40
automation in, 112 blocking bus, 29
circuit extraction in, 115119 blocks
circuit verification in, 111112, logic, 39
113, 115116, 125126 restriction of, 9192
complexity of, 111112 in RTL/logic-level design, 4954
definition of, 111 in SoC, 24
differential, 115 bottom-up processes, 78, 147
digital design vs., xii, 112113,
140141 BUFFER gate, 5051
entry, 114120 buffer insertion, 80
layout in, 115 built-in self-test (BIST), 87
library in, 112113 bus(es), 22, 24, 2830, 84
parasitic devices and, 140 blocking, 29
phases of, 113 examples, 30
simulation in, 114115, 121125 networks vs., 29
trade-offs in, 117, 119 nonblocking, 29
voltage in, 111
analog signal, xii, 35, 118
AND gate, 51 C
annotated schematic, 115, 120, 137 C++, 14, 16, 24, 37
application-specific integrated circuit C programming language, 14, 16, 24,
(ASIC), xiv 37
arrival times, 6566 capacitance, 80, 82
ASIC (application-specific integrated of gates, 74
circuit), xiv switching, 72
assemblers, 28 of wires, 7374
assumed process, 106 capacitors, 33


charge distribution of, 82 informal, 1415
leakage and, 74 languages used in developing,
carrier, 3134 1416
CD (critical dimension), 5 requirements of, 1314
cell, 57 circuit design, 95146
layout, 138 analog, xii, 34, 110126
library, 57 basics of, 9596, 98102
size, 130 definition of, 96
entry, 9798, 114125
ceramic pin grid array (CPGA), 31 simulation in, 96, 104109,
channel, 35, 9899 114115
characterization, 97, 113, 129, 138 styles of, 102104
139 verification in, 109110
curve fitting output from, 139 circuit extraction, 97, 113, 115119,
goal of, 138 136138
in layout design, 138140 circuit layer, 5
LUT output from, 140
process of, 139 circuit verification, 109110
analog, 111112, 113, 115116,
chip area, 37 125126
chip design circuit checks in, 109, 125
complexity in, 17 consistent methodology in, 110,
fundamentals of, xxi 126
manufacturability impacting, establishment of naming
144145 conventions in, 110, 126
overall, 150 LVS, 116
prototyping process, 3944, 142 pin assignments in, 109110,
sub-flow, 21 125126
tools, 911 prelayout, 96, 113
chip design flow, 111 circuit/logic topology, 75
chip design methodologies, xixvi clean clocks, 142
domain specific, xii clock distribution network, 143
implementation specific, xivxv
level specific, xii clock planning, 142143
market specific, xiiixiv clock signal, 142
technology specific, xvxvi parasitic devices and, 142
chip devices square, 142
key electrical characteristics of, 70 used to synchronize signals, 142
parameters of, 6970 clock synthesis, 39
performance characteristics of, 70 clock-gating techniques, 73
chip integration, 141 CMOS (complimentary metal oxide
chip package, 3134 semiconductor), xvi, 41, 71, 98,
codesign of, 3132 101
types of, 31 coding languages, 24, 4748
chip planning, 3944, 142 combinational blocks, 4952
advanced, 4243
collaboration in, 4243 communications circuits, xiii
feasibility established by, 40 comparator block, 5455
synthesis in, 3940 compilation, 63
chip specifications, 8, 1318 compilers, 28, 87
definition of, 1314 complexity
development of, 1618 of analog design, 111112
formal, 14, 1516


in chip design, 17 in system-level design, 2229
management techniques, 34 tools for, 5457
reduced by NoC, 30 design for test (DFT), 85
complimentary metal oxide compilers, 87
semiconductor (CMOS), xvi, 41, manufacturing influencing, 93
71, 98, 101 scan based, 8687
test synthesis in, 8688
computing circuits, xiii test verification in, 8889
conductors, 83 tools, 8587
constraints, 41, 58, 60, 65, 6768, design loops, 32
77, 79, 89, 127 design rule checking (DRC), 128,
consumer electronics, xiii 132133
contacts, 129, 130 design rule language, 133
controllability, voltage, 73 development environments, 28
core, 26 device location, 130
corners, 6970, 8081 device models, 124
coupling, 8184 device orientation, 130
CPGA (ceramic pin grid array), 31 DFT (design for test), 8589, 93
critical dimension (CD), 5 diffusion, 128
critical paths, 103 digital design, 34, 97110, 152
cross talk, 84 analog design vs., xii, 112113,
aggressor vs. victim in, 8182 140141
fundamental principles of, 82 encoding relationship in, 111
influencing signal integrity, 8182 digital signal, 35
custom analog block, 78 digital signal processor (DSP), 22, 35
custom digital block, 78 dimensional variations, 5
customer demand, for computational DIP (dual in-line package ), 31
power and bandwidth, 71 direct current (DC) power
consumption, 72, 7475
D direct memory access (DMA), 2223
divider block, 55
DC analysis, 121122 DMA (direct memory access), 2223
DC power consumption, 72, 7475 drain terminal, 74
debugging, 28, 6263 DRC (design rule checking), 128
decision loops, 8 in analog design, 141
decomposition, 4 definition of, 132
increasing importance of, 133
defects, 85 rules checked in, 132133
design defects, 85 as subflow of layout design, 132
design entry DSP (digital signal processor), 22, 35
analog, 114120 dual in-line package (DIP), 31
annotations in, 120
circuit, 9798, 114125 dynamic circuits, 102103
goals of, 114116 dynamic power, 72, 77
graphical languages used in, 49 dynamic power consumption, 7274
hardware design languages in,
RTL/logic-level, 4749

Index 155


E flip-chip pin grid array (FC-PGA), 31
floorplanning, 39, 142
EDA (electronic design automation), 9 foundry, 42, 124
fully automated tools lacking in, FPGA (field-programmable gate array),
system-level design using, 2021 xv, 38, 43
edge-triggered latch, 53, 102 front-end circuits, 2
editor tool, 63, 7879, 128 functional behavior, 108
EEPROM (electronically erasable functional description, 20
programmable read-only memory), functional level, 5
22 functional requirements, 13, 15
electrical device variations, 5 functional variations, 5
electromigration, 83 functionality, 3, 6163, 8586
electronic design automation (EDA ), 9, fundamental blocks, in RTL/logic-level
2021, 38 design
combinational, 4952
electronically erasable programmable sequential, 5254
read-only memory (EEPROM), 22
emacs, 54
embedded-software developers, 27 G
accelerating high-level design, 43 gain
function as focus of, 44 amplifier, 120
simulation vs., 43 input-to-output, 117
of SoC, 43 gate(s), 45, 49, 70
entry/assembly tools, 910 AND, 51
OR, 51
equalization, 37 BUFFER, 5051
equivalent circuit, 32 capacitance of, 74
estimator functions, 37 edge-triggered latch, 53, 102
exclusive OR gate, 52 exclusive OR, 52
exercise(s), 148149 formation of, 100101
abstraction, 6 leakage of, 74
chip design methodology exercise, level-sensitive latch, 53
xvi library, 59, 9192
chip specification, 16 MUX, 51
functionality, 3 NAND, 51, 100101, 130
NOR, 100101, 106108
extraction, 32, 97, 113, 115119, 129, NOT, 50
136138 restriction of, 9192
scannable latch, 53, 8687
XOR, 52
F gate level, 5, 45
fast synthesis, 39 general programming languages, 24,
fault coverage, 8889 3437
fault models, 8586 gigahertz, 2
FC-PGA (Flip-chip pin grid array), 31 GPS (Global Positioning System), 3
field-programmable gate array (FPGA), granularity, voltage, 73
xv, 38, 43 graphical languages, 25
hardware programming vs., 47
fingers, 97 in RTL/logic-level design, 49


H physical, 129
resembling grids, 145
hardware, software vs., xii layout design, 95146, 152
hardware/system design languages, analog, 140141
24, 4748 basics of, 9596
characterization in, 138140
high-level design, xii, 43 components used in, 127128
high-level synthesis, 38 constraints in, 127
hold constraint, 6768 DRC subflow of, 132
hot spots, 34, 75 entry, 127
flow, 126
guidelines in, 130131
I influenced by manufacturing, 132
manufacturing technology and,
impedance, 117119 128
inductors, 33 physical, 141144
restrictions in, 131
input logic description, 60 tools, 127, 128
input parameters, 105106 verification in, 132138
input patterns, 62, 7475 layout extraction, 97, 113, 136138
input signals, 105106, 115 goal of, 137
input/output circuits, 2 netlists in, 137
input-to-output gain, 117 process, 137
instance, 25 layout verification
DRC, 128, 132133
interface description, 20 layout extraction in, 136138
interface requirements, 14, 15 LVS, 128, 133136
inverter, 100, 102 layout-versus-schematic (LVS)
IP (intellectual property), 24 verification, 116, 128, 133136,
JK leakage power, 71
leakage power consumption, 72,
Java, 14, 16, 24, 38 7475, 80
jitter, 5, 36 level shifter, 90
level-sensitive latch, 53
L level-sensitive scan design (LSSD), 87
library, 57, 9293, 9596
latch in analog design, 112113
edge-triggered, 53, 102 cell, 57
formation of, 100102 gates, 59, 9192
level-sensitive, 53 mapping, 5960, 92
scannable, 53, 8687 of pcells, 127
simulation on, 108109 setting up, 60
layer orientation, 130 link configuration, 37
layers of interconnect, 130 LNA (low-noise amplifier), 116117
layout, 5, 47 local interconnect, 131
in analog circuit design, 115 logic design, 39, 4564, 7192,
automation of, 95 151
cell, 138 logic level, 5, 45
checking, 96 logic optimization, 59, 92, 151


logic simulation, 6164 micro-electromechanical devices
functionality verified by, 6163 (MEMs), 5
manufacturing influencing, 92 models, 8, 9597
process of, 62 for blocks in SoC, 24
logic synthesis, 39, 5761 device, 124
logic optimization in, 59, 92 fault, 8586
manufacturing influencing, 92 general programming languages
tasks of, 6061 creating, 3437
technology mapping in, 5960 technology device, 106
tools, 5758 modules, 55
look-up table (LUT), 140 Monte Carlo simulation, 123125
low-level design, xii multiple threshold, 102103
low-noise amplifier (LNA), 116117 multiple-supply design, 104
low-skew clocks, 142 multiplexer (MUX), 51
LSSD (level-sensitive scan design), 87
LUT (look-up table), 140
LVS (layout versus schematic), 116, N
128, 133136
in analog design, 141 naming conventions, 110, 126
netlist generated by, 134 NAND gate(s), 51
process, 134 circuit schematic, 130
violations, 135 formation of, 100101
layout, 130
netlist, 57, 106, 124
M generated by LVS, 134
generation of, 60, 115
manufacturability, 152 in layout extraction, 137
chip design impacted by, 144145 verification of, 61
design flows influenced by, 145 networks, 2830
manufacturing buses vs., 29
device models in, 124 networks on chip (NoC), 30
layout design influenced by, 132 nMOS transistor, 98102, 103
logic design influenced by, 9193
power influenced by, 8081 NoC (networks on chip), 30
timing influenced by, 6970 noise
manufacturing defects, 85 definition of, 81
output, 117
manufacturing layer, 5 substrate coupling, 8384
manufacturing variations, 5 supply voltage, 83
mask set, 8 noise margin, simulation of, 107108
Mathematica, 24, 28 noisy spots, 34
Matlab, 14, 24, 25, 28, 37, 38 nonblocking bus, 29
Media Processing, 3 nonexhaustive process, 10
component, 22 NOR gates, 100101, 106108
with SiP, 22
with SoC, 2223 NOT gate, 50
n-well, 128
MEMs (micro-electromechanical
devices), 5
metal, 127, 129, 130
metastability, 6768


O
objectives, 41
open-source software, 27
operating system, 27
operators, 45, 49–53, 70, 74, 91–92, 100–102
OPGA (organic pin grid array), 31
optical lithography, 9
optimization criteria, 41, 58
optimization parameters, 37
OR gate, 51
organic pin grid array (OPGA), 31

P–Q
packets, 30
parametric chip model, 37
parametric simulation, 122
parasitic devices, 129, 137, 140, 142
partial scan, 87
partitioning, 39
pcells, 127
performance requirements, 13, 15
physical design, 141
   clock planning in, 142–143
   power planning in, 143–144
physical implementation, 39
physical layer, 5
pin assignments, 109–110, 125–126
pitch, 130
placement, 40, 84
pMOS transistor, 100–101, 103
polysilicon, 127, 129, 130
postlayout yield analysis, 145
power analysis, 78, 151
power constraints, 77, 79
power consumption
   AC, 72–74
   average, 78, 107
   best case vs. worst case, 77
   components of, 74
   DC, 72, 74–75
   estimation of, 36, 107, 109
   leakage, 72, 74–75, 80
   manufacturing influencing, 80–81
   ratio, 114
   trade-offs in, 117
   verification of, 107
power distribution, 34
power planning, 143–144
power supply distribution network, 34, 143
power verification, 64, 71–81
   basic concepts of, 72–80
   customer demand and, 71
   across entire design flow, 75
   estimation as, 76
   loop of changes in, 79–80
   manufacturing influencing, 80–81, 93
   optimization and, 71, 77, 79–80
   simulation and, 76
   technology limitations and, 71
   verification as, 76
power/battery management, 2
prelayout verification, 96, 113
pre-PD checking, 64, 89–91
pre-physical design (pre-PD) checking, 64, 89–91
   automation of, 91
   chip influencing, 90
   companies influencing, 89–90
   guidelines for, 89
   technology influencing, 90
process variations, 75, 80
processors, xiv
prototype, simulation with, 44
prototyping process, 39–44, 142
p-well, 128

R
radio-frequency circuits, 2
random access memory (RAM), 22
reactants, 118
read-only memory (ROM), 22
refinement process, 17
register, 54
regression tests, 63
requirements capture
   goal of, 17
   refinement process in, 17
   requirements of, 17
   stakeholders and, 17–18


resistors, 32
resolution enhancement technologies (RET), 146
RET (resolution enhancement technologies), 146
ROM (read-only memory), 22
routing, 9, 40, 84
RTL, 45
RTL/logic-level design, 151
   entry, 47–49
   flow, 46–47
   fundamental blocks in, 49–54
   graphical languages in, 49
   hardware programming vs. graphical languages in, 47
   increased automation in, 45
   manufacturing influencing, 91–93
   power verification, 71–81
   pre-PD checking, 89–91
   signal integrity in, 81–84
   simulation, 61–64
   synthesis, 39, 57–61, 92
   system-level design vs., 45–47
   testability in, 85–89
   timing verification, 65–70
   tools, 54–57

S
scaling, voltage, 73
scan-based DFT, 86–87
scannable latch, 53, 86–87
schematic, 96, 97–98, 106
   annotated, 115, 120, 137
   edge-triggered latch, 102
   entry, 114–116
   inverter, 100
   NAND, 101, 130
   NOR, 101
semiconductors, 84
semi-custom digital block, 78
sequential blocks, 52–54
   classification of, 52
   fundamental, 53
set-up constraint, 67
shorts test, 86
signal coupling, 83–84
signal integrity, 34, 80, 81–84
   analysis of, 93
   BUFFER gate and, 51
   cross talk influencing, 81–82
   electromigration influencing, 83
   placement/routing, 84
   substrate coupling influencing, 83–84
   supply voltage noise influencing, 83
   timing analysis, 84
silicon substrate, 83–84
simulation
   AC analysis, 121
   in analog design, 114–115, 121–125
   in circuit design, 96, 104–109, 114–115
   conditions for, 107
   DC analysis, 121–122
   emulation vs., 43
   goal of, 104–105
   inputs needed for, 105–106
   on latch, 108–109
   logic, 61–64, 92
   Monte Carlo, 123–125
   netlists used in, 137
   of noise margin, 107–108
   on NOR gate, 106–108
   parameters in, 122–123
   parametric, 122
   and power verification, 76
   with prototype, 44
   RTL/logic-level design, 61–64
   single-point, 122
   speed limitations of, 43
   statistical, 122–123
   tools, 10
   transient, 105, 121
simulator, 63
Simulink, 25
single-ended circuit, 102–104
single-point simulation, 122
single-supply design, 104
single-threshold circuit, 102–103
SiP (system in package), 22
skew, 142
slack, 68–69
SLDL (System Level Description Language), 14, 15–16
SoC (system on chip), xiv, 22
   emulation of, 43
   system-level models in design of, 23–24


software design, 27–28
software, hardware vs., xii
source terminal, 74
SpecC, 24, 25
special-purpose design languages, 24
STA (statistical timing analysis), 92
stakeholders, 17–18
static circuits, 102–103
static power consumption, 72, 74–75
statistical power, 77
statistical simulation, 122–123
statistical timing analysis (STA), 92
stimuli, 124
stimuli generating devices, 115
storage devices, 66
structural description, 20, 22
stuck-at fault model, 85–86
styles, of circuits, 102–104
   methodology implications of, 104
   static, 102–103
substrate coupling, 83–84
subthreshold leakage, 74
supply voltage noise, 83
switching capacitance, 72
synthesis
   in chip planning, 39–40
   clock, 39
   fast, 39
   high-level, 38
   logic, 39, 57–61, 92
   test, 86–88
system characteristics, 6
system in package (SiP), 22
system level, 6, 8
System Level Description Language (SLDL), 14, 15–16
system on chip (SoC), xiv, 22, 23–24, 43
system simulator, 37
SystemC, 14, 24, 26, 29, 38
system-level description
   components of, 20
   in SpecC, 25
   in SystemC, 26–27
system-level design, 19–44, 150
   block description from, 40
   chip planning in, 39–44
   definition of, 19
   entry, 22–37
   growth of, 19–21
   importance of, 20
   packaging aspects in, 31–32
   RTL/logic-level design vs., 45–47
system-level design flow, 21, 36

T
technology device models, 106
technology mapping, 59–60, 92
technology parameters, 41
temperature, 70, 75, 81, 106
test input generation, 87
test logic generation, 86–87
test plan, 85
test synthesis, 64
   in DFT, 86–88
   test input generation goal of, 87
   test logic generation goal of, 86–87
testability, 85–89, 151
   analysis, 88–89
   controllability in, 88
   observability in, 87
   in RTL/logic-level design, 85–89
   test synthesis increasing, 64, 86–88
   verification of, 64, 88–89
testbench, 62
threshold voltage, 99
timing analysis, 65–70, 84, 92, 106–107, 151
timing analyzer, 68
timing constraints, 65
timing, impact of manufacturing on, 69–70
timing verification, 64, 106–107, 151
   complications of, 66
   key aspects of, 65
   manufacturing influencing, 92
   outputs of, 68–69
   RTL/logic-level design, 65–70
   for simple function, 66–67
   STA in, 92
   tools for, 84
   violations in, 67–68, 83


tool(s)
   assembly, 9–10
   ATPG, 87
   chip design, 9–11
   design entry, 54–57
   DFT, 85–87
   editor, 63, 78–79, 128
   layout design, 127, 128
   logic synthesis, 57–58
   power analysis, 78
   RTL/logic-level design, 54–57
   simulation, 10
   software design, 28
   timing analysis, 84
   verification, 10–11
   viewer, 63–64
top-down processes, 78, 147
total scan, 87
tracks, 130
transient analysis, 121
transistor, xv–xvi, 74, 108, 113, 120
   basic equation for, 99
   CMOS, 98
   nMOS, 98–102, 103
   pMOS, 100–101, 103
twin-well process, 128

U
UML (unified modeling language), 14
unified modeling language (UML), 14

V
variable threshold, 102–103
variable-supply design, 104
variations
   dimensional, 5
   electrical device, 5
   functional, 5
   manufacturing, 5
   process, 75, 80
verification tools, 10–11
Verilog, 24, 47, 48, 54, 55, 58, 60
VHDL (very-high-speed integrated circuit hardware design language), 9, 38, 47, 48, 56, 58, 60
vias, 145
victim signal, 81–82
viewer tool, 63–64
violations
   in LVS, 135
   in timing verification, 67–68, 83
voltage
   in analog design, 111
   controllability, 73
   granularity, 73
   high, 99
   as independent variable, 123
   noise and, 83
   reference, 125
   scaling, 73
   supply, 106
   threshold, 99
   uncertainty in, 70, 80
voltage reference, 125
voltage supply, 106
waveforms, 109
   limitations of, 63
   usefulness of, 62–63
wells, 128
wire constructs, 55
wires
   capacitance of, 73–74
   electromigration in, 83
   key electrical characteristics of, 70
   manufacturing influencing parameters of, 69–70

Y–Z
yield, 144–145, 152
