
IEEE TRANSACTIONS ON COMPUTER AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 19, NO. 12, DECEMBER 2000

An Industrial View of Electronic Design Automation
Don MacMillen, Member, IEEE, Michael Butts, Member, IEEE, Raul Camposano, Fellow, IEEE, Dwight Hill, Senior Member, IEEE, and Thomas W. Williams, Fellow, IEEE

Abstract—The automation of the design of electronic systems and circuits [electronic design automation (EDA)] has a history of strong innovation. The EDA business has profoundly influenced the integrated circuit (IC) business and vice-versa. This paper reviews the technologies, algorithms, and methodologies that have been used in EDA tools and the business impact of these technologies. In particular, we will focus on four areas that have been key in defining the design methodologies over time: physical design, simulation/verification, synthesis, and test. We then look briefly into the future. Design will evolve toward more software programmability or some other kind of field configurability like field programmable gate arrays (FPGAs). We discuss the kinds of tool sets needed to support design in this environment.

Index Terms—Emulation, formal verification, high-level synthesis, logic synthesis, physical synthesis, placement, RTL synthesis, routing, simulation, simulation acceleration, testing, verification.

Manuscript received May 24, 2000. This paper was recommended by Associate Editor M. Pedram.
D. MacMillen is with Synopsys Inc., Mountain View, CA 94065 USA (e-mail: macd@synopsys.com).
M. Butts, R. Camposano, D. Hill, and T. W. Williams are with Synopsys Inc., Mountain View, CA 94065 USA.
Publisher Item Identifier S 0278-0070(00)10449-X.

1We do not attempt to reconstruct the history of EDA in detail. The sketch below is only meant to exemplify and summarize some of the major advancements that created sizeable markets.

I. INTRODUCTION

THE AUTOMATION of the design of electronic systems and circuits [electronic design automation (EDA)] has a history of strong innovation. The EDA business has profoundly influenced the integrated circuit (IC) business and vice-versa, e.g., in the areas of design methodology, verification, cell-based and (semi)-custom design, libraries, and intellectual property (IP). In the digital arena, the changing prevalent design methodology has characterized successive "eras."1

• Hand Design: Shannon's [SHA38] work on Boolean algebra and McCluskey's [79] work on the minimization of combinational logic circuits form the basis for digital circuit design. Most work is done "by hand" until the late 1960s, essentially doing paper designs and cutting rubylith (a plastic film used to photographically reduce mask geometries) for mask generation.

• Automated Artwork and Circuit Simulation: In the early 1970s, companies such as Calma enter the market with digitizing systems to produce the artwork. SPICE is used for circuit simulation. Automatic routing for printed circuit boards becomes available from Applicon, CV, and Racal-Redac, and some companies such as IBM and WE develop captive IC routers. Calma's GDS II becomes the standard layout format.

• Schematic Capture and Logic Simulation: Workstation technologies combined with graphical user interfaces allow designers to directly draw their logic in schematic capture tools. Once entered, these designs could be simulated at the logic level for further development and debugging. The largest entrants at this juncture were Daisy, Mentor, and Valid.

• Placement & Routing: During the 1980s, powerful IC layout tools emerge: automatic placement & routing, design rule check, layout versus schematic comparison, etc. At the same time, the emergence of the hardware (HW) description languages Verilog and VHDL enables the widespread adoption of logic and RTL simulators. The main players in the commercial arena become Mentor and Cadence (ECAD/SDA/Gateway/Tangent/ASI/Valid).

• Synthesis: In 1987, commercial logic synthesis is launched, allowing designers to automatically map netlists into different cell libraries. During the 1990s, synthesizing netlists from RTL descriptions, which are then placed and routed, becomes the mainstream design methodology. Structural test becomes widely adopted by the end of the decade. RTL simulation becomes the main vehicle for verifying digital systems. Formal verification and static timing analysis complement simulation. By the end of the 1990s, Cadence and Synopsys have become the main commercial players.

The EDA market for ICs has grown to approximately $3B. Also, the fabric of design, meaning the basic elements such as transistors and library cells, has evolved and become richer. Full custom design, cell-based designs [mainly standard cells and gate arrays for application specific ICs (ASICs)], and FPGAs [or programmable logic devices (PLDs)] are the most prevalent fabrics today.

This paper reviews the technologies, algorithms, and methodologies that have been used in EDA tools. In particular, we will focus on four areas that have been key in defining the design methodologies over time: physical design, simulation/verification, synthesis, and test. These areas do not cover EDA exhaustively; e.g., we do not include physical analysis (extraction, LVS, transistor-level simulation, reliability analysis, etc.), because this is too diverse an area to allow a focused analysis within the space limits of this paper. We also exclude the emerging area of system-level design automation, because it is not a significant market today (although we believe it may become one).

We then look briefly into the future. Design will evolve toward more software (SW) programmability or some other kind




of field configurability like FPGAs. The evidence for this exists today with the healthy growth of the FPGA market as well as the increased attention this topic now receives at technical conferences such as the Design Automation Conference. The reasons are manifold. The fixed costs of an IC (NRE, masks, etc.) and the rising costs of production lines require an ever-increasing volume. Complexity makes IC design more difficult and, consequently, the number of IC design starts is decreasing, currently at around 10 000 ASIC/system-on-a-chip (SoC) design starts per year plus some 80 000 FPGA design starts per year. So there is increasing pressure for designs to share a basic architecture, typically geared toward a specific application (which can be broad). Such a design is often referred to as a "platform." In this scenario, the number of platforms will probably grow into the hundreds or thousands. The number of different uses of a particular platform needs to be at least in the tens, which puts the total number of different IC "designs" [platforms × (average platform use)] in the 100 000 range. If the number of different platforms is higher than on the order of 1000, then it is probably more accurate to talk simply about programmable SoC designs, but this does not change the following analysis. The EDA market will fragment accordingly.

• A market of tools to design these platforms, targeting IC designers. These tools will need to master complexity, deal with deep submicrometer effects, allow for large distributed design teams, etc. Examples: physical synthesis (synthesis, placement, and routing), floorplanning, chip-level test, top-level chip routers, etc.

• A market of tools to use these platforms, targeting system-on-a-chip designers. Typically, these tools will handle lower complexity, will exhibit a certain platform specificity, and will require HW/SW codesign. Examples: synthesis of FPGA blocks, compilers, HW/SW cosimulation including platform models (processors, DSP, memory), system-level design automation, etc.

This paper explores the technical and commercial past and touches on the likely future of each of the four areas—layout, synthesis, simulation/verification, and test—in this scenario.

II. PHYSICAL DESIGN

From both technical and financial perspectives, physical design has been one of the most successful areas within EDA, accounting for roughly one-third of EDA revenues in the year 2000. Why has physical design automation been so successful? There are many factors, but here are three based on the history of EDA and machine design. First, physical design is tedious and error prone. Early computer designers were comfortable designing schematics for complex machines, but the actual wiring process and fabrication was a distinctly unattractive job that required hours of exacting calculation and had no tolerance for error. And since physical design is normally driven by logic design, minor logical changes resulted in a restart of the physical process, which made the work even more frustrating. Second, the physical design of machines could be adjusted to make it more amenable to automation. That is, whereas the external behavior of a circuit is often specified by its application, its layout could be implemented in a range of ways, including some that were easy

to automate. Finally, the physical design of computers was something that is relatively easy to represent in computers themselves. Although physical design data volume can be high, the precise nature of layout geometry and wiring plans can be represented and manipulated efficiently. In contrast, other design concepts such as functional requirements are much harder to codify. Providing mechanized assistance to handle the sheer volume of design was probably the basis of the first successful EDA systems. To meet the demand, EDA systems were developed that could capture artwork and logic diagrams. Later, the EDA tools started adding value by auditing the designs for certain violations (such as shorts) and later still, by automatically producing the placement or routing the wires. The value of design audits is still a major contribution of physical EDA. In fact, today, physical verification of designs (e.g., extraction) amounts to about 45% of the market for physical EDA—the design creation process accounting for 55%. However, due to space limitations, this paper will focus on the design creation half.

A. Evolving Design Environment

The evolution of physical EDA tools ran in parallel with the evolution of the target process and the evolution of the available computing platforms. Defining the "best" target process has been a tug-of-war between the desire to simplify the problem in order to make the design and testing process easier, and the desire to squeeze the best possible performance out of each generation of technology. This battle is far from over, and the tradeoff of technology complexity versus speed of design will probably be a theme for the next generation of chip technologies. In addition to the evolution of the target technologies, each new generation of technologies has impacted EDA by providing a more capable platform for running CAD tools. This section will discuss the co-evolution of EDA tools, the machines they ran on, and the technologies that they targeted. Because physical EDA is a large and diverse field, there are many taxonomies that could be applied to it. For the purposes of exposition, we will divide it roughly along the time line of technologies as they have had an economic impact in the EDA industry. Specifically, these would include interactive tools, automatic placement & routing, layout analysis, and general layout support. On the other hand, there have been physical design technologies which have been quite promising, and even quite effective within limited contexts, but which have not impacted the revenue numbers very much. The end of this section hypothesizes why these technologies have not yet had an economic impact.

B. Early Physical Design: Interactive Support

The earliest physical EDA systems supported the technology of the day, which was small and simple by today's standards but complex compared with the systems before it. Early electronic machines consisted of discrete components, typically bipolar transistors or very small-scale ICs packed with passive components onto a two-layer board. A rack of these boards was connected together via a backplane utilizing wirewrap interconnections. In this environment, the first physical EDA tools were basically electronic drawing systems that helped produce manufacturing drawings of the boards and backplanes, which technicians could then implement with hand tools. In the late 1960s,

for example. At the time. Prior to this. The introduction of the “super minis. which greatly simplified the problem of placing and routing cells. Applicon and ComputerVision developed the first board and chip designing graphics stations during the 1970s.1430 IEEE TRANSACTIONS ON COMPUTER AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS. and relied heavily on programmable logic arrays (PLAs) which appeared to be the ideal way to specify complex behavior of circuits. which were 16-bit machines like PDP-11s or perhaps 24-bit machines such as Harris computers. This was the approach of Mentor and essentially of all successful physical EDA companies since then. and the size of circuits was limited. NO. which was a rarity on computers in the late 1970s. new machines were invented. resulting in a much larger change in area. But the problems designing with PLAs soon exceeded their advantages: they did not scale to very large sizes and did not generally abut to make larger chips. and set the pace for EDA work to follow. In those days. It was based on the specification of individual polygons in the CIF language. Since the spacing of the . When the layout was made up of cells of a standard height that were designed to abut so that their power and ground lines matched. or perhaps 80 character by 24-line CRT terminals. In this context. but the approach it advocated seemed ripe for automation. the first commercial automatic physical design systems started demonstrating that they could compete with designs done by layout engineers. VOL. could afford. thus. almost any placement you create was legal (even if it was not very efficient). most computer users were running mechanical paper terminals. especially IBM. but meant that EDA SW developers could depend on a steady sequence of ever more capable workstations to run their SW-only products on. automatic physical EDA in the late 1970s was just beginning to provide placement and routing functions. and RCA were probably the most advanced. But around 1969. was now a small fraction of machine sales. This often meant that it would no longer fit into the area allocated for it. Finally. included the Mann IC Mask maker and the Gerber flatbed plotter. These were commercially significant. created a demand for tools that could accelerate the intermediate steps of design. the very EDA systems that optimized their area undermined their predictability: making a small change in the logic of a PLA could cause it to no longer fold. therefore. As long as the final design was being implemented by hand. the EDA support for PLAs. that could produce the final masks automatically. An open question at the time was: “What is the most appropriate IC layout style?” The popular book by Mead and Conway [MEA80] had opened up the problem of layout to masses of graduate students using only colored pencils. and defined a new level of abstraction. tools internal to the biggest companies. thus. This killed the prospect for special EDA workstations. 12. [39]). general-purpose workstations were available on the open market that had equal and soon greater capabilities than the specialized boxes.. and could be modified easily. Specifically. grew to large companies by developing and shipping workstations specifically intended for EDA. Bell Labs. The capabilities of these early EDA workstations were limited in many ways. Daisy and Valid. At the time. industry was experimenting with a completely different paradigm called “standard-cell” design. 
These were considered high-end machines because they required a huge amount of memory (as much as 128K bytes) and were. The EDA industry. With the advent of an almost unlimited (about 24 bit) virtual memory. the 1980s brought a collapse of memory prices and bit-mapped displays came to dominate the market. (But the story of programmable logic is far from over. was never a major economic factor. the introduction of standard cells de-coupled the transistor-level design from the logic design. it was fashionable to model the design process as one that was external to the computing environment and. C. priced out of the range of ordinary users. DECEMBER 2000 the situation changed because the backend of physical design started to become automated. Furthermore.) While PLAs and stick figures were all the rage on campus. including both the algorithms applied and the HW that was available. it became possible to represent huge design databases (of perhaps 20K objects) in memory all at once! This freed up a lot of development effort that had gone into segmenting large design problems into small ones and meant that real designs could be handled within realistic running times on machines that small to medium companies. PLA EDA was developed (e. some display designers were starting develop “bit mapped” graphics displays. and of storage tube displays (which use a charge storage effect to maintain a glow at specific locations on the face of the CRT) opened the door to human interaction with two-dimensional physical design data. the digitizing tablet was a device for “capturing” it and storing it into a computer memory. At the time of their commercial introduction. The advent of reasonably priced vector-graphics monitors (which continuously redraw a series of lines on the CRT from a display list). Whereas custom ICs were laid out a single transistor at a time and each basic logic gate was fit into its context. other design styles were being developed that were just as easy to use but which scaled to much larger sizes. which was typically limited to something like 64K words—hardly enough to boot today’s machines. whereas other tools would run on either mainframes or the “minicomputers” of the day. Part of this was the “capturing” of schematic and other design data. as will be discussed in the section EDA Futures. most EDA companies in the late 1970s and 1980s were seen as system suppliers that had to provide a combination of HW and SW. Mechanizing the mask making. 19. had predictable interfaces.g. After all. which were typically stored on large reels of magnetic tape. the time to produce a new machine was long. and universities. per se. complex printed circuit board and IC masks were done directly on large sheets of paper or on rubylith. So.” especially the DEC VAX 11-780 in 1977 probably did as much to accelerate computer—aided designas any other single event. Eventually. where each pixel on the CRT is continuously updated from a RAM memory. To support them. Automatic Design While interactive physical design was a success. Physical EDA needs more than computing power: it requires graphical display. they were easy to design. which had served as an early leader in workstation development. All at once the bottleneck moved from the fabrication side to the production of the design files. Of course. These technologies were commercially developed during the 1970s and became generally available by the late 1970s. The IBM tools naturally ran on mainframes.

and there was a huge demand of medium scale integration (MSI) and large scale integration (LSI) chips for the computer systems of the day (which typically required dozens of chip designs. layout and electrical problems all at once. respectively. which included both a placement and a routing subsystem. The current state of the art for timing-driven placement is to merge the placement algorithm with timing analysis. at that time. In more recent years. the use of analytical guidance produced much higher quality much more quickly in systems like PROUD [117] and Gordian [68]. but did not stop their adaptation by other EDA vendors. and has recently put these formats and related readers into an open source model. and later. and others provided early silicon P&R tools. Hence. the annealing technology became better understood. part of their success was due to the development of a standard ASCII format for libraries and design data. respectively. called LEF and DEF. around 1983. guided by relatively simple graph partitioning algorithms. But slicing placement was not dead—it gained new life with the advent of analytical algorithms that optimally solved the problem of ordering the cells within one or two dimensions. The tools that provided effective area routing with the three metal layers. came into high demand for gate array use. and more parameters were derived automatically based on metrics extracted from the design. they became the de facto standards for the industry. the nature of the Metropolis and Simulated Annealing algorithms [67] became much better understood from both a theoretical and practical viewpoint. The CalMP program from Silvar-Lisco achieved both technical and economic success. Gate Ensemble. with a fixed set of time critical paths. These algorithms have the advantage that they deal with the entire netlist at the early steps. About this time. it suddenly became possible to automatically produce large designs that were electrically and physically correct. area routing dominated the CAD support world for gate arrays. But such people were in short supply. and the compute power available to EDA increased during the early 1970s. Daisy. Eventually. Following closely on the heels of standard cell methodology was the introduction of gate arrays. it became attractive to throw more cycles at the placement problem in order to achieve smaller die area. and the mantel of highest quality placement was generally held by the annealing placers such as TimberWolf [99]. Because these allowed vendors to specify their own physical libraries in a relatively understandable format. Cadence continues to support and extend LEF and DEF in its tools. SDA. tools from ArcSys (later called Avant!) and others. not stretchable. At the moment. In addition to their multilayer capability. Gradually. VRI. SDA.” producing a valid placement was never an issue with gate-arrays. Silvar-Lisco. Many decried the area efficiency of the early standard cells since the channel routes took more than half the area. logic. the competitive environment for placers has started to include claims of timing-driven placement. Synthesis directed placement (or placement directed synthesis. for example.: AN INDUSTRIAL VIEW OF ELECTRONIC DESIGN AUTOMATION 1431 rows was not determined until after the routing is done almost any channel router would succeed in connecting 100% of the nets. Academics called for a generation of “tall thin” designers who could handle the architecture. and later Silicon Ensemble products.org . 
Commercial automatic physical design tools started appearing in the early to mid 1980s. And the lack of circuit tuning resulted in chips that were clearly running much slower than the “optimal” speed of the technology. including the GARDS program from Silvar-Lisco. most of the chips were routing limited. Early annealing engines required expert users to specify annealing “schedules” with starting “temperatures” sometimes in the units of “million degrees. and put on firmer foundations. even for a simple CPU). and could at least potentially produce a placement that reduced the number of wire crossings. and synthesis-directed placement. so that the user need only supply overall timing constraints and the system will derive the paths and the internal net priorities. [46]. and ECAD were placed under the Cadence umbrella. they were marketed as Q for “quadratic” and for “quick” algorithms. according to some) is a melding of placement operations and synthesis operations. Thu the new problem. gate arrays offered more: the reduction in the chip fabrication time. When commercialized by companies such as Cadence. The fact that they were. and factors of 10 from one run to another were not uncommon. Various speed-up mechanisms were employed to varying degrees of success. which was fine for standard cells but not very effective for gate arrays.” Such homegrown heuristics reduced the accessibility of physical design to a small clique of experts who knew where all the buttons were. proprietary and not public formats slowed. Early routing products were limited to two layers. But because the area available for routing was fixed. They opened the door to placing designs of 100K cells in a reasonable time. the physical design tools from Tangent. More recent work in the area has supported path-based timing constraints.” The benefits were so obvious that standard cell layouts became standard practice for a large percentage of design starts. therefore. such as Kernighan– Lin [64] and Fiduccia–Mattheyses [50].OpenEda. and Cadence came to dominate the market for physical EDA with its Cell3. But as the competition for better QOR heated up. Early timing-driven placement techniques worked by taking a small fixed set of net weights and tried to place the cells such that the high weighted nets were shorter [45]. Because all the cells were predesigned to fit onto the “master-slice. CADI. ECAD. The Tancell and more especially the TanGate programs from Tangent were major players in placement and routing in the 1985–1986 timeframe. The synthesis operations range 2see http://www. Because simulated annealing is relatively simple to understand and implement (at least in a naïve way) many students and companies implemented annealing engines. Whereas standard cells could reduce design time. The obvious priority was for design efficiency and accuracy: most chips had to “work the first time. By putting even more constraints on the layout they reduced the number of masking steps needed to “turn around” a chip. Cadence and Avant! hold the dominant position in the market for physical P&R tools with their Silicon Ensemble and Apollo systems. Because the classic partitioning algorithms tend to run very slowly and suboptimially on problems of more than a few thousand cells. Since the introduction of Cell3 there has been competition in this area.MACMILLEN et al. 2 Many of the first commercially successful placement algorithms were based on slicing tree placement.
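To make the annealing-based placement flow discussed above concrete, the sketch below is a toy illustration only, not the TimberWolf algorithm or any commercial placer; the cell names, net sizes, and schedule parameters are invented. It swaps cells on a fixed grid and accepts uphill moves with the Metropolis criterion, and raising the weight of selected nets in the cost function mimics the net-weighting used by early timing-driven placers.

```python
import math
import random

# Toy standard-cell placement by simulated annealing.  Cells occupy slots in a
# fixed grid (so any permutation is "legal", as with gate arrays).  Cost is the
# sum over nets of weight * half-perimeter wirelength (HPWL); heavier weights
# stand in for timing-critical nets.

def hpwl(net_cells, pos):
    xs = [pos[c][0] for c in net_cells]
    ys = [pos[c][1] for c in net_cells]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def cost(nets, weights, pos):
    return sum(w * hpwl(n, pos) for n, w in zip(nets, weights))

def anneal(cells, nets, weights, rows, cols, t0=10.0, alpha=0.95, moves=200, seed=1):
    rng = random.Random(seed)
    slots = [(r, c) for r in range(rows) for c in range(cols)]
    pos = dict(zip(cells, slots))            # arbitrary initial placement
    cur = best = cost(nets, weights, pos)
    t = t0
    while t > 0.01:
        for _ in range(moves):
            a, b = rng.sample(cells, 2)      # propose a pairwise swap
            pos[a], pos[b] = pos[b], pos[a]
            new = cost(nets, weights, pos)
            # Metropolis criterion: always accept improvements, sometimes
            # accept uphill moves, less often as the "temperature" cools.
            if new <= cur or rng.random() < math.exp((cur - new) / t):
                cur = new
                best = min(best, cur)
            else:
                pos[a], pos[b] = pos[b], pos[a]   # reject: undo the swap
        t *= alpha                           # cooling schedule
    return pos, best

if __name__ == "__main__":
    cells = [f"c{i}" for i in range(16)]
    rng = random.Random(0)
    nets = [rng.sample(cells, 3) for _ in range(20)]
    weights = [5.0 if i < 3 else 1.0 for i in range(20)]   # 3 "critical" nets
    _, wl = anneal(cells, nets, weights, rows=4, cols=4)
    print("final weighted wirelength:", wl)
```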

started to migrate to the silicon domain. where automatic routing algorithms predated successful placement algorithms. and in some cases they have met with significant success within a specific environment. Very often. The nature of the technology involved placing hundreds to thousands of relatively simple circuits. This may be partly due to the lack of successful tools in the area. early automatic gate-level placement was often standard-cell based. are currently in the market leading tools. 19. Dolphin from Monterey Designs. including a process of strategically sequencing the route with a global route run before the main routing [31]. Blast Fusion from Magma. Although its economic impact of integrated synthesis with placement is small at this time. Cadence. In a different approach. before starting on the next. automatic placement had almost no value without automatic routing. The current market entrants into this new arena include Saturn. [35]. D.1432 IEEE TRANSACTIONS ON COMPUTER AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS. This tended to result in high-completion rates for the nets at the beginning and low rates. but they have not yet had large economic impact. But the classic routing techniques had a problem in that they tended to route one net at a time to completion. At this point the biggest challenges in physical EDA. sometimes called sequential routing. Although standard cells connected by channel routes were effective the advent of gate-array layout brought the need for a new generation of IC routers that could connect a set of points distributed across the surface of the die. Sometime in the 1970s the number of components on ICs started to exceed the number of components on PC boards. Ironically. or long paths. PKS from Cadence. This is not to say that the technical approaches are the same for these products. Producing a competitive router today requires extensive experimentation. 12. Hence. As defined in the early 1970s. as of yet they have not had as much impact on the business side of EDA. There were at least three reasons for this: routing was a more tedious process than placement. Addressing these quality-ofresults problems has been a main focus of router work for more than a decade. There have been many significant advances. Many academic papers and some commercial EDA systems have proposed solutions to floorplanning [90]. Because breadth-first search techniques require unacceptably long runalgorithm have been times. with Silicon Ensemble. DECEMBER 2000 from simple sizing of devices to complete resynthesis of logic starting from technology independent representations. integration. But other constraints are much more difficult to compute. F. Examples of such tools include clock tree synthesis. some of which are easy to compute. an example being specialized clock tree synthesis solutions with deliberate nonzero skew. However. Most of the practical routing algorithms were basically search algorithms based on either a breadth-first grid search [73] or line-probe search algorithms [59]. VOL. can be found in [104]. More detailed discussions on this and other issues in physical design. and other functions. which meant that channel routing could be applied. with Apollo. Floorplanning is the process of organizing a die into large regions. core logic synthesis such as timing-driven structuring. These greatly improve the timing of the final circuit. such as the amount of wasted area lost by ill fitting blocks. 
and part of the floorplanning process is reshaping the blocks to fit into the die efficiently. many optimizations. many other physical design facilities and tools have been developed and many of these have been launched commercially. and assigning major blocks to those regions. Examples of these include automatic floorplanning tools. and to a lesser extent. it has not generally been commercially viable to build these as separate tools to link into another company’s P&R system. including both gridded and gridless routing—although most of the practical commercial routers have been gridded until fairly recently. from Avant!. and many heuristics for selective rip-up and reroute done during or after the net-by-net routing [51]. There are many constraints and cost metrics involved in floorplanning. and tuning of these processes. Some start-up companies have begun to offer such support as point tools. layout generators. power/ground routing. NO. and the number of pins to be routed was an order of magnitude greater than the number of components to be placed. Routing The first application of automatic routing techniques was to printed circuit boards. channel routing was the problem of connecting a set of pins on either side of a parallel channel with the fewest tracks [42]. there was a strong motivation to provide a solution that included both. and Avant!. Early PC board routing EDA consisted mostly of in-house tools in the 1960s and 1970s running on mainframes and the minicomputers available at the time. Physical Design Tools with Smaller Economic Impact In addition to the central place/route/verify business. its definition lead to a large number of algorithms and heuristics in the literature. Most commercial “placement” systems. In this context. As mentioned above. it seems likely that most commercial placement tools of will have to have some kind of synthesis operations incorporated within them in order to remain competitive in the long run. and compactors. for nets at the end of the run. including the current generation from Cadence and Avant! have effective sizing and buffering operations built into them. in the market [92]. and placement. filler cell insertion. cell layout synthesizers. and the biggest revenue. to the nature of the floorplanning problem: it is both tractable to . others are currently working to combine RTL synthesis operations such as resource sharing. this was more like the original printed circuit board routing problem and could be attacked with similar methods. But the EDA industry overall has not seen a large revenue stream associated with floorplanning—it amounts to only about 2% of the physical IC EDA market. However. But it is also due. This section discusses some of the reasons behind the limited economic impact of these and similar tools. Because this problem was simple enough to model analytically. E. such as the used to speed up routers [58]. with references. such as the impact that routing the chip will have on its timing. and others. all while using common timing models and core timing engine. only that they are all targeting the solution of the timing closure problem by some combination of synthesis with placement. the blocks to be floorplanned are malleable. minor changes in placement required redoing the routing. Physical Compiler from Synopsys. at least in part. Critical Support Subsystems Within Place and Route Practical layout systems depend on many specialized subsystems in order to complete the chip.
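The "breadth-first grid search" routers cited above can be summarized in a short sketch. The code below is a minimal single-layer, two-pin maze router under simplifying assumptions (unit grid, no layers, no costs); production routers add global routing, line probes, multiple layers, and rip-up and reroute, as the text describes.

```python
from collections import deque

# Toy single-layer maze router: breadth-first wave expansion from the source
# over a unit grid, then a backtrace from the target along decreasing labels.
# Cells marked as obstacles (existing wires, blockages) are never expanded.

def maze_route(width, height, source, target, obstacles):
    dist = {source: 0}
    frontier = deque([source])
    while frontier:                                   # wave expansion
        x, y = frontier.popleft()
        if (x, y) == target:
            break
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height \
                    and (nx, ny) not in obstacles and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                frontier.append((nx, ny))
    if target not in dist:
        return None               # net unroutable: rip-up-and-reroute territory
    path, cur = [target], target                      # backtrace
    while cur != source:
        x, y = cur
        cur = next(p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                   if dist.get(p) == dist[cur] - 1)
        path.append(cur)
    return list(reversed(path))

if __name__ == "__main__":
    blocked = {(2, y) for y in range(0, 4)}           # a partial wall
    print(maze_route(6, 5, (0, 0), (5, 4), blocked))
```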

and so. Generally.MACMILLEN et al.g. However. [112]. but few designers would pick up someone else’s generator and use it. Even custom computer microprocessor chips. the current economic impact of cell synthesis on EDA is small. that is doubling the exponent in an exponentially increasing function. such as the idea that within ten years. At the moment. and can provide amazing productivity gains within specific contexts.. there is a new category of design. it seems as if the EDA developer’s effort level has been roughly constant over the last 20 years. Hence. since the use of explicit hierarchical blocks provides effective mechanism for insulating one complex subsystem from another. and will continue to remain so for the foreseeable future. SIMULATION AND VERIFICATION OF VLSI DESIGNS Verification of design functionality for digital ICs has always been a huge and challenging problem. That is.3 In reality. The final case example would be layout compaction tools. But long experience has clearly shown that verification work is a super-linear function of design size. a greater number of systems than ever can be designed without the need for “mask programmed” silicon at all. as a problem such as metal migration becomes well understood. chip architects are not desperate to hand this problem to a machine since skilled designers can produce floorplans that are better than the best machine-produced floorplans. design verification time continues to double every 18 months. Many projects of this nature have been done in research [118] and inside olarge companies. they were viewed with suspicion by the designers used to doing full custom layout interactively. and that the extra complexity also at least doubles the number of cycles needed to get acceptable coverage [22]. many groups would write generators for such circuits as ALUs. which is two to the power of the number of elements. run on more powerful computer platforms. the use of layout generators was a promising area in the late 1980s and early 1990s. The PLAs that represented the early attempts to automate chip design turned into PLDs and then CPLDs. states that the density of circuits on an IC roughly doubles every 18 months . we face an explosion of two to the power of Moore’s Law. automatic cell layout has had little impact on the EDA economy. The overall effect is cumulative and practical EDA systems today tend to be large and grow roughly linearly with time.: AN INDUSTRIAL VIEW OF ELECTRONIC DESIGN AUTOMATION 1433 human designers. they were targeted at the limited number of full custom layout designers. and a few start-ups have attempted to make this a commercial product. such as the Intel Pentium. These tools provide a framework for algorithmic placement systems to be written by designers. Another area that has not had much commercial impact has been cell layout synthesis. This allows one mask set to support multiple users. even across multiple customers. companies today seem unwilling to spend large sums for automatic floorplanning. However. and too little demand for them. almost all designs will have some amount of programmable logic [93]. For people interested in full-custom layout. self-heating) is revealed. and these people could very often provide similar systems themselves. and can develop them within a matter of days. The cost of designing and testing each chip is large and growing. 
These were promoted starting in the 1980s as a way of improving the efficiency of physical design and enabling a design to be ported from one technology to another [120]. On the other side of the balance. This is because there are several factors that tend to oppose each other: the number of design starts is diminishing. 3Moore’s Law. As chips get larger. named after Gordon Moore of Intel. since its detailed physical and electrical properties were not well understood. and design teams get more complex. This may be because the total amount of transistor-level layout done in the industry is small and getting smaller. they produced layouts that were generally correct in most rules. the verification problem is Moore’s Law squared. all other things remaining equal. generator development systems were targeted at a relatively small group of people who were experts in both layout and in programming. targeted at ever faster and more capable very large scale integration (VLSI) technologies. Second. they suffered from several problems. Also. if not expanding market. as in the TransMeta Crusoe processor. Such tradeoffs abstract some of the physics of the problem to allow complex effects to be safely ignored. Third. In addition. floorplanning and hierarchical design in general may play a larger role. A fair approximation is that doubling the gate count doubles the work per clock cycle. but the complexity of each design is increasing. but often required tuning to understand some of the peculiar technology rules involved in the true detailed high-performance layout. the gate-arrays used to reduce turnaround time now compete head-to-head with FPGAs. they tended to suffer from the syndrome of too many contributors. The number of custom chip designers will probably remain relatively constant. these systems are not in wide use. Hence. Thus. What does this mean for physical EDA? That depends on if the EDA is for the chip designers or for the chip users. Coupled with the increasing complexity of the fabrication process and the faster workstation. III. the leverage that advanced EDA tools can offer continues to be significant. named “application specific programmable products” [38] which combine programmability into traditional application specific chip designs. Another example is the projections from FPGA vendors. and in the end were not widely accepted. the next wrinkle in the technology (e. and attractive to them. But in any case. On the Future of Physical Design EDA Each generation of technology has brought larger-scale problems. Each generation has traded off the raw capabilities of the underlying technology with the desire to get working designs out quickly. Even if processor performance continues to increase according to Moore’s Law (exponentially). EDA tools for mask programmed chips that to offer good performance with excellent reliability should probably have a steady. Likewise. That is. If we measure difficulty of verifying a design by the total possible states of its storage elements. First. This is the process of taking a transistor-level specification and producing detailed transistor-level layout. it is not that bad. now compete with SW emulation of the same instruction set. or it may be because the overhead verifying circuits at the transistor-level raises the effective cost too high. But so far. G.
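The "Moore's Law squared" argument above can be restated as a small back-of-the-envelope calculation. The numbers below are illustrative only, not measurements: gate count and required cycle count each double per 18-month generation while host performance doubles once, so verification wall-clock time still doubles each generation.

```python
# Back-of-the-envelope restatement of the verification-growth argument: per
# process generation, gate count doubles, so simulation work per cycle doubles
# AND the cycle count needed for comparable coverage at least doubles (a 4x
# growth in total work), while the host only gets 2x faster.

gates = cycles = host_speed = 1.0
for generation in range(1, 6):            # five 18-month generations
    gates *= 2                            # Moore's law (footnote 3)
    cycles *= 2                           # coverage needs at least 2x the cycles
    work = gates * cycles                 # work per verification run
    host_speed *= 2                       # CPUs also ride Moore's law
    sim_time = work / host_speed          # relative wall-clock verification time
    print(f"gen {generation}: design {gates:4.0f}x, work {work:6.0f}x, "
          f"host {host_speed:4.0f}x, verification time {sim_time:4.0f}x")
# Verification time comes out 2x, 4x, 8x, 16x, 32x: doubling every generation.
```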

such as IBM. and multiple clock domains. Event-driven logic simulation remains a widely used fundamental algorithm today. VOL. at least on the asynchronous portion. and now into the system level. 12. C. [46]. locates it in time.e. Models Most designs contain large modules. Abstraction Any level we choose to verify at involves some abstraction.1434 IEEE TRANSACTIONS ON COMPUTER AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS. Cycles Soon a higher-abstraction alternative to the event-driven algorithm came into some use. particularly from logic automation and logic modeling systems.e.. 1970s examples of such simulators include IBM’s VMS [2]. which do not need verification. Gate evaluation takes very little time. Designs with asynchronous feedback. timing resolution has moved from nanoseconds to clock cycles. by taking advantage of parallelism and specific dataflows and operations. so it can correctly model asynchronous feedback. from the gate level. not on gates and time steps without activity. which built large. this is an efficient algorithm. since they are standard parts or previously verified HDL cores. ASICs) appeared in the early 1980s. and which could enforce a strict synchronous design methodology on their designers. In pure cycle-based simulation. Cycle-based simulation does not verify timing. which were the earliest adopters of VLSI technology. and corrected with jumper wires. VLSI gate arrays (i. usually only conditions that enable or disable whole blocks. Likewise. Compute time is only spent on processing events. which were verified by real-time prototyping. However. Over the last 20 years. but which must be present for the rest of the design to be verified. which are common in telecommunications and media processors. Daisy Systems. static timing analysis is used instead. TEGAS [115]. Not surprisingly. called an event. which includes transparent latches. at a substantial speed penalty. and CADAT. Since no more than 10%–20% of the gates in a gate-level design are active on an average cycle. all other things have not remained equal. Logic emulation and rapid prototyping use real programmable HW to run billions of cycles of design verification per day. mainstream verification has moved to continually higher levels of abstraction. but their verification is logically complete in a way that simulation and emulation can never be. and scheduling. They were developed for very large projects. and Valid Logic. and fans it out to its receivers. event handling is dispensed with in exchange for evaluating all gates in a block every cycle. Logic simulators used in the earliest days of digital VLSI worked mainly at the gate level. Mentor Graphics. and used an event-driven algorithm. or other asynchronous behavior. It has become less attractive as the performance of event-driven simulators has steadily improved. i. DECEMBER 2000 Since “time to market” pressures do not allow verification time to double. reporting false paths. Modern static timing analysis tools are much more capable of avoiding false paths than earlier analyzers. transparent latches. to the register transfer level (RTL). Early on logic designers found they could abstract above their transistor circuit designs and simulate at the logic level of gates and flip–flops. Fortunately we can use models of these modules. Timing analysis has the opposite disadvantage. escaping from SPICE for functional verification. which involves a lot of data-dependent branching and pointer following through large data structures. A. 
can prove equivalence and properties of logic networks. Pipelined processors with large caches. Multiple clocks in a common clock domain can be simulated by using subcycles of the largest common divisor of the clock periods. when a design has asynchronous logic. mostly CPUs. B. IBM’s 3081 mainframe project developed EFS. while internal details are abstracted away. most of which emerged in the mid-1980s. replacing discrete MSI TTL implementations. not everyone can use pure cycle-based simulation. Electronic CAD algorithms and methodologies have come to the rescue in three general ways: abstraction. [26]. Resulting performance is typically an order of magnitude greater than eventdriven simulation on the same platform.4 Special-purpose HW has been applied to accelerate verification. such as mainframe computers. It fundamentally assumes that each gate is only active once per clock cycle. formal verification. which were becoming available in mainframes and workstations of the late 1980s. which only provide the external behavior. event-driven simulation with full timing is called for. Events The event-driven algorithm was previously well known in the larger simulation world outside ECAD. Preprocessing analyzes the gates between storage elements to find a correct order of evaluation.. It makes no assumptions about timing beyond gate delays. Setup and hold time violations on storage elements are modeled and detectable. Formal tools are limited in the depth of time they can address. Analysis. Analysis does a more thorough job of verifying timing than simulation. a cycle-based simulator. partially because of their incorporation of cycle based techniques. With ASICs came a much broader demand for logic simulation in the design community. Unfortunately. acceleration and analysis. cannot be simulated this way. Models take a number of SW or HW forms. can run this algorithm very fast. 19. The assumption of a single clock cycle excludes designs with multiple independent clock domains. became well established by putting schematic capture and event-driven logic simulation running on networked microprocessor-based workstations onto ASIC designers’ desks. course these days a nanosecond is a clock cycle … . The simulator has few run-time data dependencies. which can accommodate its methodology constraints. [45]. fully modeled nominal timing at the nanosecond level. NO. that is difficult to speed up in a conventional processor. compared with simulation and emulation. the first cycle-based logic simulators were developed by large computer companies. timing. Cycle-based simulation abstracts 4Of time up to the clock cycle level. In synchronous designs. in the late 1970s [84]. since it covers all paths without depending on stimulus for coverage. complex processors with a single clock domain. It calculates each transition of a logic signal. Pure cycle-based simulation is widely used today on designs. among others. in rank order. paths that can never occur in actual operation. Most of the work in event-driven simulation is event detection.
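As a concrete illustration of the event-driven algorithm described above, the following toy simulator (two gates, integer delays, a priority queue standing in for the event wheel; not the data structures of any production simulator) processes only transitions and their fanout, which is the source of the algorithm's efficiency.

```python
import heapq

# Toy event-driven gate-level simulator.  An "event" is a (time, net, value)
# transition; evaluating a gate whose input changed may schedule a new event
# on its output after the gate delay.  Gates on quiet nets are never touched.

GATES = {  # name: (function, inputs, output, delay)
    "g1": (lambda a, b: a & b, ("a", "b"), "n1", 2),
    "g2": (lambda a, b: a | b, ("n1", "c"), "out", 1),
}

def simulate(stimulus, end_time=50):
    values = {net: 0 for g in GATES.values() for net in (*g[1], g[2])}
    fanout = {}
    for name, (_, ins, _, _) in GATES.items():
        for net in ins:
            fanout.setdefault(net, []).append(name)
    wheel = list(stimulus)          # (time, net, value) external events
    heapq.heapify(wheel)            # the "event wheel" (here a priority queue)
    while wheel:
        t, net, val = heapq.heappop(wheel)
        if t > end_time or values[net] == val:
            continue                # no transition, so no event
        values[net] = val
        print(f"t={t:3d}  {net} -> {val}")
        for gname in fanout.get(net, []):           # fan out to receivers
            fn, ins, out, delay = GATES[gname]
            new = fn(*(values[i] for i in ins))     # evaluate the gate
            heapq.heappush(wheel, (t + delay, out, new))
    return values

if __name__ == "__main__":
    simulate([(0, "a", 1), (0, "b", 1), (5, "c", 1), (10, "b", 0)])
```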

and finally. 2) Handling multibit buses. and it became an IEEE standard in 1995 [116]. While originally considered synonymous with the cycle-based algorithm. with an optimized PLI model interface. F. ISP was also the first to use the term RTL. Verilog’s Programming Language Interface (PLI) is a standard model interface. such as VCS. in 1979–1980. stimulus generators in process simulation languages. The performance of cycle-based simulation is achieved when the design permits it. but also included RTL and behavioral features. In 1988. E. Synopsys introduced Design Compiler. As in programming language compilers. are widely used. Often only the bus-level behavior is needed. Hardware description languages (HDLs) were first developed to meet this descriptive need. Around 1980. as seen from the chip’s pins. A HW modeler takes an actual chip and interfaces it to simulation. the U. registers and constants as single values. and can execute binary code at a very high speed. VHDL became an IEEE standard in 1987 [44]. into a native compiled-code executable. Other speedup methods used in VCS include circuit levelization. Testbench languages which express that abstraction had a diffuse origin. This technique trades preprocessing time for run-time efficiency. the overhead of an interpreter is avoided. Instruction set simulators abstract a processor further. and IBM’s CEFS of 1985 [1] were early examples of compiled code simulation. This is how the precursors to the testbench language Vera were implemented. which was the first tool to synthesize higher-level Verilog into gates. once enabled by HDL synthesis. which is now Synopsys VCS. All of the Above Today’s highest performance SW-based simulators. Verilog was very successful. and some optimizations can be found at compile time which are not practical or possible at run time. to express testbenches and other parts of the system. Verilog-XL.S. Abstraction to a HDL increases verification performance in a number of fundamental ways. is done automatically when safely possible. simulates the design according to the simulation algorithm. . 1) Replacing simulation of each gate with simulation of multigate expressions. to the instruction level.MACMILLEN et al. but without giving up the ability to correctly simulate any Verilog design. and would work identically on any simulator. 4) Using higher-level ways to express control flow. from four-states and 120-values to two states. an RTL HDL [84]. Compiled Code Simulators were originally built as looping interpreters. optimization of hierarchical structures and design abstraction techniques. All these modeling technologies connect to simulators through standard interfaces. by retaining full four-state on those variables that require it [32]. Initially motivated mainly by representation. recognizing that most activity is triggered by clock edges. design at the gate level became too unwieldy and inefficient. or NC-Verilog. Department of Defense demanded a uniform way to document the function of parts that was technology and design-independent. takes advantage of the compiled-code architecture to further improve performance. Hardware Description Languages As design sizes grew to hundreds of thousands of gates. Testbench Languages Testbenches abstract the rest of the universe the design will exist in. to instrument it in arbitrary ways. coalescing of logic blocks. which was very efficient for gate-level simulation. IBM’s EFS simulator [45]. Nothing is as fast or accurate as the silicon itself. 
3) Combining these two abstractions to use multibit arithmetic and logic operators. and an event wheel list structure to schedule events in time. Advanced source code analysis and optimization. They can also provide visibility to internal registers through a programmer-friendly interface. such as case. Hill’s SABLE at Stanford [60]. and remained proprietary through the 1980s. HDLs verification performance benefits. VCS inserts a cycle-based algorithm where it can. There was also a need to express designs at a higher level to specify and evaluate architectures. Cadence. G. Further abstraction of logic values. using static data structures to represent the logic and its interconnect. which constructs a program that. which simulated the ADLIB HDL [61]. much like that done in programming language compilers. propelled it into the mainstream methodology it is today. HDLs. with multiple influences from testers. after it has acquired Gateway.: AN INDUSTRIAL VIEW OF ELECTRONIC DESIGN AUTOMATION 1435 Fully functional SW models completely represent the internal states and cycle-based behavior. opened it in 1991. based on the earlier ADLIB HDL and Ada programming language. which are controlled by transaction command scripts. providing stimulus and checking responses to verify that the design will work correctly in its environment. VHDL (VHSIC HDL) emerged from this requirement. when executed. In 1984–1985. It is a differentiating feature of SW versus HW-accelerated simulators. without losing compliance with standard Verilog. Only a HDL could satisfy this demand. which is usually a good tradeoff to make in logic verification. Chronologic’s Verilog Compiled Simulator. so bus-functional models of chips and buses. PLI also makes it possible for the user to build custom utilities that can run in tandem with the simulator. D. The first widely known HDL was Bell and Newell’s ISP of 1971 [10]. event-driven simulators have also been implemented in compiled code to good effect. The logic design is analyzed by a compiler. and many coverage tools have been written using PLI. for and if/then/else statements that can enclose entire blocks. [46] used BDL/CS. was the first compiled code simulator for Verilog in 1993. creating and verifying their designs in HDL. combine event-driven and cycle-based simulation of HDLs. This allowed engineers to move their whole design process to a higher level of abstraction. Moorby and others at Gateway developed the Verilog HDL and event-driven simulator. inside an eventdriven simulator. modern programming languages and formal methods. Some early cycle-based simulators used a compiled code technique instead. For example. and also the first native-code compiled code simulator.
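The compiled-code technique described above can be illustrated with a deliberately small sketch: instead of interpreting a netlist, a one-cycle evaluation routine is generated for a levelized design and handed to the host language, here for an invented two-bit counter. This is only a schematic of the idea, not the code generation performed by VCS or any other simulator.

```python
# Toy "compiled-code" simulation: emit straight-line host code for one clock
# cycle of a levelized (topologically ordered) design rather than interpreting
# the netlist gate by gate.  Two-state integers stand in for the multibit
# operators mentioned above.

NETLIST = [                      # (target, expression) in evaluation order
    ("ns0", "~s0 & 1"),          # next-state logic between the registers
    ("ns1", "(s1 ^ s0) & 1"),
]
REGISTERS = [("s0", "ns0"), ("s1", "ns1")]

def compile_design():
    lines = ["def cycle(state):",
             "    s0, s1 = state['s0'], state['s1']"]
    lines += [f"    {t} = {e}" for t, e in NETLIST]             # combinational logic
    lines += [f"    state['{q}'] = {d}" for q, d in REGISTERS]  # clock edge
    namespace = {}
    exec("\n".join(lines), namespace)   # "compile" the generated simulator
    return namespace["cycle"]

if __name__ == "__main__":
    cycle = compile_design()
    state = {"s0": 0, "s1": 0}
    for t in range(5):
        print(t, state["s1"], state["s0"])
        cycle(state)             # every gate evaluated exactly once per cycle
```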

Ikos remains the sole vendor of event-driven HW accelerators. was a pipelined superscalar RISC attached processor. and most such offerings eventually disappeared. [81]. Similar technology became commercially available in the 1990s in logic emulator form. we can see in the IC-Test language of John Newkirk and Rob Matthews at Stanford in 1981. Today’s CoBALT Plus is capable of speeds over 100 000 cycles per second on system designs up to 20 million gates. A HW time wheel schedules events. at both the operation and processor levels. J. Verilog and VHDL. not testbenches. in pure cycle-based or unit-delay modes. developed from the start with a focus on functional verification. K. I. followed by the Engineering Verification Engine (EVE) [9]. Event-Driven Accelerators Event-driven HW accelerators have been with us almost as long as logic simulation itself. encapsulation of testbench IP. Verification problems exploded in the mid to late 90s. their case has been overstated [29]. explicit binding of testbench and device pins. and Quickturn introduced its CoBALT machine. with timing abstracted in generators that were directly tied to specification sheets. fast enough to be an in-circuit-connected logic emulator in live HW targets. and from specialized operators and control. or a mix of the two.8 million gates were achieved by the mid-1980s [11]. Modern testbench languages. from Systems Science. first the Yorktown Simulation Engine (YSE) research project [41]. so pipelining can take advantage of this low-level parallelism to raise gate evaluation speed to match the accelerator HW clock rate. such as Vera. [30]. Their sharply increased performance and capacity greatly decreased the demand for HW accelerators. Silicon Solutions. H. check results. Massively Parallel General-Purpose Processors Another 1980s response to the need for accelerated performance was research into logic simulation on general-purpose massively parallel processors [107]. and Specman. which allows users to generate stimulus. in the late 1980s and early 1990s. and continue to mature and grow in their usage and capabilities. 19. and give it to the time wheel very quickly. Formal methods had been proposed to address testbench generation. automatic generation of stimulus (with specialized packet generation mechanisms and with nondeterministic automata). function. Most timesteps have many independent events to process. Hardware can directly evaluate a logic primitive. with large physical memory and HLL compilers. Some of the issues it has addressed include directed and random stimulus. The modern verification languages Vera. Although there are many appealing elements in both standard languages and in some formal techniques. DECEMBER 2000 In the early 1980s. ways of launching and synchronizing massive amounts of concurrent activity. Zycad.1436 IEEE TRANSACTIONS ON COMPUTER AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS. with companies finding that they were spending substantially more on verification than on design. appeared in response. we can see the influence of process simulation languages and early HDLs in Cadat’s DSL (Digital Stimulus Language). This was one to two orders of magnitude faster and larger than the SW simulators of the day. Today. and finally. now part of Synopsys. gaining speedup from parallelism. Events across partition boundaries are communicated over a fast interconnect. Most accelerators also use the high-level parallelism available from locality of the circuit’s topology. specialized testbench debugging. 12. 
Ikos. etc. from Verisity. contained most elements found in event-driven accelerators ever since. have been adopted by many of the largest semiconductor and systems companies. the Compute Engine [6]. The first machine. a productization of IBM silicon and compiler technology directly descended from the YSE/EVE work. Still. detect an output event. as testbenches have become complex programs in themselves. the need for acceleration was immediately apparent. detailed above that apply to all implementations of the cycle-based algorithm. and were sometimes oversold as universal panaceas.). EVE could simulate up to two million gates at a peak of 2. Speeds between 0. It was widely used within IBM. which they are not. sequences. capturing the notion of executable specifications. running actual SW on the accelerated design. especially for processor and controller-based systems. seamless interfacing with HDLs and and Java for HW/SW with high-level languages such as C co-simulation. which accelerate a partitioned design. Cycle-based machines are subject to the same design constraints. and measure coverage in a pragmatic fashion [4]. so a large fraction of design projects cannot use them. Also. Valid. by having multiple parallel processors. and others responded with event-driven HW accelerators. NO. Systems Science worked with Intel to develop a functional macro language. as well as standard languages Perl and . built in the 1960s at Boeing by Angus McKay [11]. In the 1990s. high-level coverage (state. It foreshadowed the development of general-purpose workstations with pipelined superscalar RISC CPUs.1 million and 360 million gate evaluations per second and capacities up to 3. became widely used to develop testbenches. in spite of C/C their focus on HW or SW description.2 billion gate evaluations per second. Soon after one MIPS desktop workstation-based EDA emerged in the early 1980s. features that became integral in modern testbench languages: de-coupling of timing and function. and the stimulus constructs of HiLo and Silos. which are fanned out to their input primitives through one or more netlist tables. we can see a big influence from both of those fields. VOL. and the use of a standard. when Arkos introduced its system. Cycle-Based Accelerators and Emulators IBM developed several large-scale cycle-based simulation accelerators in the 1980s. A logic emulator can plug the design into live real-time HW for verification by actual operation. HW-aware language. flexible abstractions tailored to check complex functional responses. Acceleration and Emulation An accelerator implements a simulation algorithm in HW. high-level language (C extended with programs). Vera is an object-oriented. Mentor Graphics’ 1986 accelerator. The result of this research . find its delay. Daisy’s stimulus language. Daisy. In the late 1980s.
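The testbench-language concepts listed above, constrained-random stimulus and functional coverage, can be sketched in a language-neutral way. The example below is not Vera or e syntax; the packet fields, constraints, and coverage bins are invented for illustration.

```python
import random

# Language-neutral sketch of two testbench-language ideas: constrained-random
# stimulus generation and functional coverage measurement.

def random_packet(rng):
    """Generate a packet subject to simple constraints."""
    length = rng.choice([64, 128, 256, 1500])         # legal lengths only
    kind = rng.choices(["data", "control"], weights=[9, 1])[0]
    priority = rng.randrange(8)
    if kind == "control":
        priority = 7                                  # constraint: control is max priority
    return {"len": length, "kind": kind, "prio": priority}

def run_test(num_packets=200, seed=2):
    rng = random.Random(seed)
    coverage = set()                                  # functional coverage bins hit
    for _ in range(num_packets):
        pkt = random_packet(rng)
        dut_accept = pkt["len"] <= 1500               # stand-in for driving a DUT + checking
        assert dut_accept, "checker: DUT rejected a legal packet"
        coverage.add((pkt["kind"], pkt["len"]))       # cross-coverage bin: kind x length
    total_bins = 2 * 4
    print(f"functional coverage: {len(coverage)}/{total_bins} bins hit")

if __name__ == "__main__":
    run_test()
```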

K. Massively Parallel General-Purpose Processors

Another 1980s response to the need for accelerated performance was research into logic simulation on general-purpose massively parallel processors [107], [81], [30]. The result of this research was mainly negative. Massively parallel processing succeeds on physical simulations, such as fluid flow and structural analysis, because the inherent nearest-neighbor communications in three-dimensional physical reality keeps communications local and predictable. Logic designs, in contrast, have a very irregular topology, and events can immediately propagate very far across a logic design. Also, parallelism in logic designs is very irregular in time, greatly concentrated at the beginning of a clock cycle. In global-time event-driven logic simulation, many timesteps have few events to do in parallel, as with cycle-based simulation. Distributed-time algorithms [28] were investigated, but the overhead required outweighed the gains [109]. Interestingly, some simulation tools can take advantage of the modest multiprocessing widely available in modern workstations: enough high-level parallelism is available to keep up to about ten processors busy [8], but nowhere near 100.

Much more common is task-level parallelism, which exists across a large collection of regression vector sets that must be simulated to validate design correctness after minor design revisions late in a project. Many projects have built "farms" or "ranches" of tens or hundreds of inexpensive networked workstations or PCs, which run conventional simulation tools in batch mode, for very large regression tests. In terms of cycles/s/dollar, this is a very successful acceleration technique, and it is probably the most commonly used one today. It is important to remember that this only works when task-level parallelism is available. When a single thread of verification is required, such as running OS and application SW and large data sets on the logic design, no task-level parallelism is available, and very high performance on a single task, which is resistant to parallel acceleration, is still needed.
L. FPGA-Based Emulators

FPGAs emerged in the late 1980s as an implementation technology for digital logic. Most FPGAs are composed of small clusters of static random access memory, used as look-up tables to implement combinational logic, and flip–flops, in an array of programmable interconnect [19]. FPGAs used for verification could hold 1K gates in the early 1990s, and have grown to 100K gates or more today. Logic emulation systems soon took a large number of FPGAs and harnessed them as a logic verification tool [23]. FPGAs have less capacity when used for modeling ASIC and full custom designs than when intentionally targeted, since the technology mapping and I/O pin utilization is far from ideal.

The most difficult technical challenge in logic emulation is interconnecting the FPGAs in a general-purpose, scalable, and affordable way. Most commercial emulators, such as the post-1992 Quickturn systems, use a separate partial crossbar interconnect architecture, built with a hierarchy of crossbar routing chips [23]. Ikos emulators directly interconnect the FPGAs to one another in a grid, and time-multiplex signals over the pins synchronously with the design's clock [7]. Both interconnect architectures make the FPGAs' internal routing interdependent, so it is hard to incrementally change the design or insert probes without a major recompile.

Another major technical challenge is the logic emulation compiler [24]. A compiler does emulation-specific HDL synthesis into FPGA primitives, partitions the netlist into FPGAs and boards, observing capacity and pin constraints and critical timing paths, routes the inter-FPGA interconnect, runs timing analysis to avoid introducing hold-time violations, and runs chip-level placement and routing on each FPGA. Multilevel, multiway timing-driven partitioning is a large and NP-hard problem [5]. An emulation design database is required to map from HDL-level names to the FPGA HW for run-time debugging. Often quite a lot of user intervention is required to achieve successful compilation, especially in identifying clock trees for timing analysis of designs involving many gated clocks. At run time, the FPGAs and interconnect are programmed, and a live, real-time emulation of the logic design results.

Multi-MHz system clock rates are typically achieved. The emulated design may be plugged directly into a slowed-down version of the target HW, and operated with the real applications, OS, and data sets as stimulus; alternately, precompiled vectors may be used. System-level problems can be observed through the logic emulator's built-in logic analyzer, with access to design internals during actual operation, which is impractical with the real silicon. Interestingly, users have found as much value post-silicon as pretapeout, because of the visibility afforded by the emulator. Engineering change orders can be proven to correct the problem before a respin of the silicon. Logic emulation, in both FPGA and cycle-based forms, is widely used today on major microprocessor development projects [52], [97], and on multimedia, telecom, and networking designs. However, the expense of logic emulation HW and SW, and the persistent difficulty of using it in the design cycle, often requiring full-time specialist staff, has confined logic emulation to those large projects that can afford it.

M. Analysis

Even the most powerful simulator or emulator can only increase confidence, not guarantee, that all the important states and sequences in a large design have been checked. Analysis resulting in proven conclusions is much to be desired. Formal verification tools can provide such analysis in many important cases. Most tools emerging from formal verification research have been either model checkers, which analyze a design to see if it satisfies properties defined by the designer, or equivalence checkers, which compare two designs to prove they are logically identical. So far, all practical formal verification has been at the functional level, dealing with time only at the cycle level, depending on use of static timing analysis alongside.

N. Model Checkers

A model checker checks conditions expressed in temporal logic, called properties, against the set of reachable states in a logic design, to prove the properties to be true. Early research was based on explicit state representation of finite-state models (FSMs), which can only describe relatively small systems. But systems of any meaningful size have a staggering number of total states, requiring a better representation. The problem of state-space explosion has dominated research and limited the scope of model checking tools [34].
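A sense of why explicit-state methods run out of steam can be had from a few lines of code. The sketch below enumerates the reachable states of a small, invented round-robin-style arbiter by breadth-first search and checks one simple property on every state; the reachable-state count grows rapidly with the number of requesters, which is exactly the explosion that motivated symbolic, OBDD-based representations.

```python
from itertools import product

# State = (current grantee, tuple of pending requests); all details invented for illustration.
def next_states(state, n):
    grant, pending = state
    for reqs in product((0, 1), repeat=n):               # every possible input combination
        new_pending = tuple(p | r for p, r in zip(pending, reqs))
        if new_pending[grant]:                            # grantee still requesting: keep the grant
            nxt = grant
        else:                                             # otherwise hand off to some pending requester
            nxt = next((i for i, p in enumerate(new_pending) if p), grant)
        yield (nxt, tuple(0 if i == nxt else p for i, p in enumerate(new_pending)))

def reachable(n):
    init = (0, (0,) * n)
    seen, frontier = {init}, [init]
    while frontier:                                       # breadth-first reachability
        nxt_frontier = []
        for s in frontier:
            for s2 in next_states(s, n):
                if s2 not in seen:
                    seen.add(s2)
                    nxt_frontier.append(s2)
        frontier = nxt_frontier
    return seen

for n in (2, 3, 4):
    states = reachable(n)
    # property: the granted requester never has an unserved pending request
    assert all(p[g] == 0 for g, p in states)
    print(n, "requesters ->", len(states), "reachable states")
```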

Bryant's breakthrough development of ordered binary decision diagrams (OBDDs) in 1986 [20] opened the way for symbolic model checking, representing reachable state sets in a compact and canonical form. McMillan at CMU, Pixley at Motorola, Kukula at IBM, and Madre and Coudert at Bull all either proposed or developed model checking tools that used OBDDs to represent reachable state sets. Later university systems, notably SMV [82] and VIS [16], further developed model checking technology. Model checkers have been used successfully to check protocol properties, such as cache coherency implementations, avoiding deadlock or live-lock, and safety-critical control systems, such as automotive airbags. The accuracy of the verification depends on the correctness and completeness of the properties declared by the user. They also require very intensive user training and support, to express temporal logic properties correctly. Model checkers are usually limited to module-level analysis by the fact that determining the reachable state set across more than several hundred registers with consistency is impractical. Since nearly all large designs are modular to that level of granularity, analysis is usefully applied at the module level. An important additional result of model checking can be specific counterexamples when a model fails to meet a condition [33], which can be used with simulation to identify and correct the error in the design cycle.

O. Equivalence Checkers

An equivalence checker formally compares two designs to prove whether they are logically equivalent. The first equivalence checker was developed for the 3081 project at IBM in about 1980 [106]. Early equivalence checkers used Boolean representations, which were almost like OBDDs, but were not canonical. Bryant's development of OBDDs [20] was put to immediate use to provide a canonical form in research tools, and OBDDs are used today [89]. Modern commercial equivalence checkers, Chrysalis in 1994 and Synopsys Formality in 1997, brought the technology to mainstream design projects. Formality's most important innovation was its use of a number of different solvers that are automatically chosen according to the analysis problem, to get results in reasonable time and space on designs with a wide variety of characteristics. It is based on OBDDs for complex control logic, while other algorithms are used for large combinational datapath elements like multipliers.

Usually, an RTL version of the design is validated using simulation or emulation, establishing it as the golden version. This golden version serves as the reference for the functionality at the beginning of an equivalence checking methodology. Each subsequent version of the design is an implementation of the behavior specified in, and simulated from, the golden database. Once an implementation database has been proven equivalent, it can be used as a reference for any subsequent equivalence comparisons. For example, the flattened gate-level implementation version of a design can be proven equivalent to its hierarchical HDL source. The process is ideal in principle, since no user-defined properties or testbenches are involved.

Equivalence checkers work by applying formal mathematical algorithms to determine whether the logic function at each compare point in one design is equivalent to the matching compare point in the other design. To accomplish this, the tool segments the circuit into logic cones with boundaries represented by source points and end points, such as primary I/Os and registers. With the boundary points of each logic cone aligned between the two designs, solvers can use the functional descriptions of the two designs to determine their equivalence [113]. Equivalence checking is also very powerful for validating very large combinational blocks that have a clear higher-level definition, such as multipliers. Both equivalence checkers and model checkers can be very demanding of compute time and memory space, because the size of the OBDD representations explodes, and generally are not practical for more than a few hundred thousand gates at a time.
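The compare-point idea can be illustrated with a deliberately tiny example. In the sketch below, two versions of the same logic cone, written as Python functions over the cone's source points, are checked by exhaustive evaluation; the exhaustive loop stands in for the OBDD or other solver machinery of a real equivalence checker and is only viable for small cones. The functions themselves are invented for the example.

```python
from itertools import product

def golden_cone(a, b, c):
    return (a and b) or (a and c)        # "golden" description of one compare point

def implementation_cone(a, b, c):
    return a and (b or c)                # restructured gate-level version of the same cone

def cones_equivalent(f, g, n_inputs):
    # Exhaustive evaluation over the cone's boundary points; exact, but exponential in inputs.
    return all(f(*v) == g(*v) for v in product((False, True), repeat=n_inputs))

print(cones_equivalent(golden_cone, implementation_cone, 3))   # True
```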
P. Future: SoC

Super-linear growth of verification work shows no sign of letting up. Systems on chip (SoC), which have recently become economically desirable to fabricate, pose the latest and greatest challenge, and will dominate verification issues in the near future. SoCs are doubly hard to verify. First, their increased gate count requires more work per cycle and more cycles to get verification coverage. This sharply increases the number of verification cycles needed each workday, so long as Moore's Law holds up. Second, because SoCs are systems, systems include processors, and processors run code. The problem of system-level verification is largely one of HW/SW co-verification: the embedded SW OS, drivers, and applications and the SoC HW must be verified together. A single thread of verification is required to run code, so no task-level parallelism is available. As before, this challenge is being met by a combination of abstraction, acceleration, and analysis.

Abstraction of system SW execution to the source code or instruction set levels was combined with HDL logic simulation, by Eagle Design Automation, to form a HW/SW co-verification tool set. The processor-memory part of the system is modeled by either an instruction set simulator, which can achieve hundreds of thousands of instruction cycles per second, or by direct execution of the embedded source code compiled to the verification workstation, which is even faster. The rest of the system HW resides in a conventional HDL logic simulator. The two are connected through the Eaglei system, which embeds the processor-memory SW execution within a Virtual Software Processor (VSP), allowing full-speed execution of the SW. The VSP interacts with the HW simulation through a bus-functional model of the processor bus. The VSP and logic simulator can be coupled together to follow the same simulation time steps, when SW is interacting with simulated HW, or uncoupled when SW is running on its own, which is most of the time. Coupling control can be static or dynamic, triggered by HW or SW activity. This way the HW-SW system can be verified, by both HW and SW engineers, through many millions of cycles in a reasonable time on a standard workstation [114]. Larger chips and system designs succeed by adding acceleration or emulation, or by prototyping with FPGAs.

With modern simulation and formal verification tools in the design cycle, most chip development projects today succeed in getting correct first silicon. The combination of abstraction, acceleration, and analysis has risen to meet the challenge so far.

Moving HW descriptions above HDL to a system-level representation is another way to get verification performance by abstraction. Once system-level languages, such as SystemC, can be used with system-level synthesis tools on general-purpose logic designs as HDLs are today, we will see increasing synergy among abstraction, acceleration, and analysis technologies that will enable a system-based design methodology that gets high verification performance by abstraction. New and better algorithms and implementations in formal verification can be expected, as this is an active research area in academia and industry. FPGA and compiler technology for acceleration continues to grow rapidly, benefiting from the widespread acceptance of FPGAs in HW products. Work continues on new ways of applying FPGA acceleration technology to abstracted HDL and system-level simulation. We may even see FPGA acceleration of solvers in formal verification. More sophisticated HDL analysis and compilation techniques continue to improve HDL simulator performance. Analysis techniques cover logic very thoroughly, but cannot go very deep in time, while simulation can cover long times without being complete. These complementary technologies can be combined to exploit the strengths of each. Verification power will continue to increase through incremental improvements in each of the abstraction, acceleration, and analysis technologies as well, in both design verification and testbench generation. When new techniques are applied to real-world design problems, opportunities for further innovation can become apparent, as necessity drives further innovation to meet the needs of the market.

IV. SYNTHESIS

A. The Beginnings

Logic synthesis can trace its early beginnings to the work on switching theory. The early work of Quine [91] and McCluskey [79] addressed the simplification, or minimization, of Boolean functions; exactly optimal solutions were, and still are, out of computational feasibility. MINI [62] was an early system that targeted two-level optimization using heuristic techniques. Since the most demanding logic design was encountered in the design of computers, it is natural that IBM has played a pivotal role in the invention of many algorithms and in the development of many in-house systems. An early multiple-level logic synthesis system to find use in designs that were manufactured was IBM's LSS [37]. LSS was a rule-based system that first operated on a technology-independent representation of a circuit and then mapped and optimized the circuit further. Work on LSS, as well as another IBM synthesis system, the Yorktown Silicon Compiler (YSC) [14], contributed in many ways to the start of the explosive growth in logic synthesis research that occurred during the 1980s. IBM synthesis research gave the necessary background to a generation of investigators, not only by publishing papers, but also as a training ground for many researchers.

Notable efforts in the universities include the University of Colorado at Boulder's BOLD system [57] and Berkeley's MIS [13]; in industry, there was also Socrates [54] from General Electric. All of these systems were aimed at the optimization of multilevel logic, and the approaches they used could be roughly characterized as either algorithmic or rule-based (or both). The algorithmic approach was used in MIS, while Socrates, as well as the earlier LSS, both used a rule-based approach. For a comprehensive treatment, with references, see [40] and [43]. In what follows, we will concentrate on the developments in technology and business in the Design Compiler family of synthesis products.
B. Logic Optimization and Design Compiler

The mid to late 1980s saw the beginnings of companies that were focused on commercializing the advances of academic research in logic synthesis. If the first wave of EDA companies was focused on physical design and verification, then the second wave was focused on logic design. The companies Daisy, Mentor, and Valid all offered schematic entry as a front end to logic simulation programs. While these offerings did not utilize logic optimization techniques, some of them did offer rudimentary logic manipulation, such as "bubble pushing" (phase assignment), that could manipulate the logic in schematics. Here we find the very first use of logic manipulation in the "second wave" of EDA companies. The early entrants in the logic synthesis market were Trimeter, Silc, and Optimal Solutions Inc. (later renamed Synopsys).

The early success of Design Compiler in this environment can be traced to a number of technical and business factors. One of the first is that the initial use of Design Compiler was for optimization of preexisting logic designs [98]. This initial usage scenario was enabled by a number of factors. In this scenario, users could try out the optimization capabilities and make their designs both smaller and faster with no change to their design methodology. It was probably in this fashion that Design Compiler was first most heavily used. A corollary to the optimization-only methodology was the use of Design Compiler as a porting and reoptimization tool that could move a design from one ASIC library or technology to another. In either case, the goal of the optimization system becomes one of matching or beating a human designer in a greatly accelerated fashion. The impact on design methodology is a key factor in the acceptance of any new EDA tool. While this is understood much better today in the design and EDA communities, in the late 1980s it was not universally understood that it could be a make-or-break proposition for tool acceptance.

On the business side, the availability of a large number of ASIC libraries was a key advantage. This was driven by a combination of early top customer needs coupled with a keen internal focus on silicon vendors, with both support and tool development. Library Compiler was developed to support the encapsulation of all the pertinent data for logic synthesis, in the .lib and STAMP formats that are licensed through the TAP-IN program. Initially, this contained information only about logic functionality and connectivity. Today, these libraries contain information about wire-loads, both static and dynamic power consumption, area, and delay, and ever increasing amounts of physical information (this is supported by Liberty).

Another key for the early acceptance of Design Compiler was the availability of schematic generation. Logic designers at that time were most familiar with schematic entry and were accustomed to thinking in schematic terms. The output of optimization had to be presentable in that fashion so that the designers could convince themselves not only of the improved design characteristics but, more importantly, of the correctness of the new design. Without natural and intuitive schematic generation, the initial acceptance of logic optimization by logic designers would have been significantly slowed.

C. RTL Synthesis

The early Synopsys literature attempted to describe synthesis to a large audience of logic designers who, at that time, did not understand synthesis, by the "equation"

Synthesis = Translation + Optimization.

What we have already described was the optimization term in this equation. We now turn our attention to the translation term. Hardware description languages have been an active area of research for many years, as evidenced by the long history of CHDL [S10] (see also the comments on HDLs in the Verification section). However, it was the introduction of Verilog that enabled the HDL methodology to gain a foothold. Users were first attracted to Verilog not because of the language, but because it was the fastest gate-level simulator available [98]. VHDL was then introduced in the mid 1980s from its genesis as a HDL based on the Ada syntax and designed under the VHSIC program [44]. Once designers started using these languages for gate-level simulation, they could then experiment with more SW-oriented language constructs such as "if … then … else" and "case," as well as the specific HW modeling statements that modeled synchronization by clock edges. Experimentation with RTL synthesis soon followed.

A stumbling block to using these HDLs as synthesis input was that they were both designed as simulation languages; it was not obvious how to subset the languages and how to superimpose synthesis semantics onto them. Most HDL synthesis research in the 1980s was focused on behavioral-level synthesis, not RTL synthesis. A notable exception was the BDSYN program at Berkeley [100]. In fact, at that time, there were many competing subsets and coding styles for both Verilog and VHDL. While most efforts relied on concurrent assignment statements in the HDLs to model the concurrency of HW, Synopsys pioneered the use of sequential statements to imply combinational and sequential logic. This "Synopsys style" or "Synopsys subset," as it was called at the time, became the de facto standard, and its influence can be seen today in the recently approved IEEE 1076.6 standard for VHDL Register Transfer Level Synthesis. This had a pivotal impact in making HDLs more generally useful and, thus, accepted by the design community. It was in this fashion that, together, Design Compiler and Verilog XL were able to bootstrap the RTL methodology into the ASIC design community. But this change was effected only slowly at first, as designers gained confidence in the methodology.

The biggest competition for logic synthesis for a place in the design flow in the mid 1980s was the concept of "Silicon Compilers." By this we mean compilers that produced final layout from an initial specification. There was a lot of uncertainty in the market as to which technology would provide the most benefit. In the end, logic synthesis coupled with automatic place and route won almost all of the business, for several reasons. The first was that, in most cases, the designs produced by silicon compilation were bigger and slower than those produced by synthesis and APR. Silicon compilers were either tightly targeted on a specific design (e.g., producing an 8051) or were compiling to a fixed micro-architecture, again resulting in uncompetitive designs. Second, silicon compilation did not have the universality of application that was provided for by a synthesis-based design flow.

Another term of art that has survived is "quality of results," most often abbreviated as "QoR." The design tradeoff curve plots the longest path delay on one axis versus the area required to implement the described functionality on the other; this curve is also known as the banana curve. By continuously changing the timing constraints, a whole family of solutions on the design tradeoff curve could be obtained. One tool's, or algorithm's, QoR was demonstrated to be superior if its banana curve was entirely below the banana curve results of any other approach.
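The banana curve is produced simply by sweeping the timing constraint and re-optimizing at each point. The loop below is a minimal, hypothetical illustration of that sweep; the area-versus-delay relation inside optimize() is invented purely to make the example run, whereas a real flow would invoke the synthesis tool at each constraint.

```python
# Sweep the timing constraint to trace out a delay/area tradeoff ("banana") curve.
def optimize(target_delay_ns):
    achieved_delay = max(target_delay_ns, 2.0)       # pretend 2 ns is the fastest feasible point
    area = 1000.0 + 4000.0 / achieved_delay          # invented area model: faster => more area
    return achieved_delay, area

curve = [optimize(t) for t in (10.0, 8.0, 6.0, 4.0, 3.0, 2.0, 1.5)]
for delay, area in curve:
    print(f"delay {delay:4.1f} ns   area {area:7.1f} units")
```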
Perhaps the greatest reason for the initial success of Design Compiler was that it focused on performance-driven design. More precisely, the optimization goal was to meet the designer's timing constraints while minimizing area. The general design of a commercial logic synthesis system is outlined in [96]. There are two key technologies for achieving timing QoR in synthesis. The first is having an embedded, incremental static timing analysis capability. The static timing engine must be incremental to support the delay costing of potential optimization moves. It also must be fast and memory efficient, since it will be called many times in the middle of many synthesis procedures. The second key technology is timing-driven synthesis, which is a collection of techniques and algorithms. They include timing-driven structuring [119] in the technology-independent domain, as well as many techniques that operate on technology-mapped netlists.

But good static timing analysis also needs good timing modeling to obtain accurate results, which includes a thorough understanding of clocking, delay modeling, interconnect delay estimation, and constraint management. It became clear in the late 1980s that simple linear delay models were no longer adequate to describe gate delay [78], [18]. To address this situation, a table look-up delay model was developed. This model, known as the nonlinear delay model (NLDM), modeled both the cell delay and the output slope as a function of the input slope and the capacitance on the output. These techniques, along with the development of the routing estimation technique known as wire load models (WLM), enabled the accurate estimation of arrival and required times for the synthesis procedures. Without good timing, it would be impossible to achieve high-quality designs.
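The table-lookup idea behind NLDM is easy to sketch. The fragment below interpolates a cell delay from a two-dimensional table indexed by input slew and output load; the axis values and delay entries are invented for illustration, whereas real values come from library characterization.

```python
import bisect

SLEW_AXIS = [0.1, 0.5, 1.0]            # input transition time (ns), invented
LOAD_AXIS = [0.01, 0.05, 0.20]         # output load (pF), invented
DELAY     = [[0.08, 0.12, 0.30],       # rows: slew index, cols: load index (ns)
             [0.10, 0.15, 0.34],
             [0.14, 0.20, 0.40]]

def interp1(x, x0, x1, y0, y1):
    return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def nldm_delay(slew, load):
    # locate the enclosing table cell, then interpolate bilinearly
    i = min(max(bisect.bisect_right(SLEW_AXIS, slew) - 1, 0), len(SLEW_AXIS) - 2)
    j = min(max(bisect.bisect_right(LOAD_AXIS, load) - 1, 0), len(LOAD_AXIS) - 2)
    d_low  = interp1(load, LOAD_AXIS[j], LOAD_AXIS[j + 1], DELAY[i][j],     DELAY[i][j + 1])
    d_high = interp1(load, LOAD_AXIS[j], LOAD_AXIS[j + 1], DELAY[i + 1][j], DELAY[i + 1][j + 1])
    return interp1(slew, SLEW_AXIS[i], SLEW_AXIS[i + 1], d_low, d_high)

print(round(nldm_delay(0.3, 0.03), 4))   # interpolated cell delay in ns
```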

RTL synthesis starts with a fairly direct mapping from the HDL to the initial circuit. That this initial mapping is direct is evidenced by the ability of those skilled in the art to directly sketch synthesized circuits from Verilog text fragments [55]. Flow control in the language text becomes data steering logic (muxes) in the circuit. Iteration becomes completely unrolled and used for building parallel HW. Clocking statements result in the inference of memory elements, and the bane of neophyte RTL designers continues to be the fact that incomplete assignments in branches of control flow will result in the inference of transparent latches. The initial HDL design description determines the type of memory element that will be inferred (i.e., a flip–flop or a transparent latch, active high or low). In this approach, a rigid separation of data and control was relaxed to combine all the generated logic into one initial design netlist that could then be further optimized at both the RTL and logic level. There are a number of SW compiler-like optimizations that can be done during this initial phase as well, such as dead-code elimination, strength reduction on multiplication by powers of two, and common subexpression elimination.

After the initial circuit synthesis, there remain a number of RTL optimizations that can gain significant performance and area improvements. Since most of these involve design tradeoffs, it becomes necessary to perform them in the context of the synthesized logic. This is necessary because critical paths can be interleaved between control and data, and many RTL optimizations tend to move the control signals in the design. Therefore, an RTL optimization subsystem was built in the middle of the compile flow [56]. It is crucial to have accurate bit-level information here, both for the RTL operators (such as additions and subtractions) as well as the control signals. By utilizing the same static timing analysis, effective algorithms can be constructed for timing-driven resource sharing, implementation selection (performance-area tradeoffs for RTL components), and operator tree restructuring for minimum delay, with bit-level information on both operator timing and area costs.

To support the accurate delay and area costing of RTL components, the initial DesignWare subsystem was constructed. This subsystem manages the building, costing, and modeling of all RTL operators and yields great flexibility and richness in developing libraries. Allowing the description of the RTL operators themselves to be expressed in a HDL, and then having the synthesis procedures called in a recursive fashion so that all timing information is calculated "in-situ," gives this subsystem all the functionality and flexibility of the logic synthesis system. By adding the capability to specify any arbitrary function as an operator in the HDL source, these RTL optimizations can be performed even on user-defined and user-designed components.

Advances in RTL arithmetic optimization have used carry-save arithmetic to restructure operation trees for minimum delay [66], as well as techniques such as using canonical signed digit representation to transform multiplication by a constant into a series of additions, subtractions, and bit shifts (another form of strength reduction).
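The canonical signed digit (CSD) recoding mentioned above can be shown in a few lines. The sketch below recodes a constant into digits drawn from {-1, 0, +1} and then multiplies using only shifts, additions, and subtractions, as a datapath generator might; the constants are chosen arbitrarily for the example.

```python
def csd_digits(c):
    digits = []                      # least-significant digit first, each in {-1, 0, +1}
    while c:
        if c & 1:
            d = 2 - (c & 3)          # +1 if c mod 4 == 1, -1 if c mod 4 == 3
            c -= d
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits

def multiply_by_constant(x, c):
    # x*c computed only with shifts and add/subtract operations
    acc = 0
    for shift, d in enumerate(csd_digits(c)):
        if d:
            acc += d * (x << shift)
    return acc

print(csd_digits(231))                         # 231 = 256 - 32 + 8 - 1
print(multiply_by_constant(13, 231), 13 * 231) # both print 3003
```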
D. Sequential Mapping and Optimization

Supporting the combinational logic synthesis is the synthesis and mapping of sequential elements. The initial HDL design description determines the type of memory element that will be inferred. But modern ASIC libraries have a wide variety of library elements that can implement the necessary functionality. These elements might offer asynchronous or synchronous reset inputs, as well as a synchronous load enable input (also called a clock enable). The choice of these elements can have an impact on both the area and the timing of the final design [72]. It is at this stage that design for testability (DFT) for sequential elements can be accomplished, by using scan-enabled flip–flops and automatically updating the design's timing constraints. Also, mapping to multibit registers can have positive effects during the place and route phase of design realization.

One sequential optimization technique that has held the promise of increased design performance has been retiming [74], [101]. But the road from theory to practical industrial use has taken longer than initially thought [102]. The stumbling block has been that most designers use a variety of complex sequential elements that were poorly handled by the classical retiming algorithm. These types can now be optimized using multiple-class retiming techniques [47].

The invention of OBDDs has benefited synthesis almost as much as verification. One early use at Synopsys was the very simple equivalence checking offered by the check_design command of Design Compiler. This functionality was based on an efficient implementation of an OBDD package [15]. Transforms and optimizations could be checked for correctness during development with the check_design command. Generally, these checks could be added to the regression test suite to ensure that bad logic bugs would not creep into the code base. While this simple command did not have the capacity, robustness, and debugging capability of the equivalence checking product Formality, it proved invaluable to the developers of logic and RTL optimization algorithms. Another use of OBDDs is in the representation and manipulation of control logic during RTL synthesis. In all of these applications, the size of the OBDDs is a critical issue. The invention of dynamic variable ordering and the sifting algorithm [95] has proven to be a key component in making OBDDs more generally useful.

E. Extending the Reach of Synthesis

We have already mentioned how scan flip–flops can be inserted during the sequential mapping phase to achieve a one-pass DFT design flow. By utilizing the same static timing analysis, it is now possible to simultaneously perform min/max delay optimization; that is, separate runs for maximum delay optimization (setup constraints), followed by minimum path (hold constraint) fixing, are no longer necessary.

Since that time, there has been a continuous stream of innovations and improvements in both logic and RTL synthesis. While less than a decade ago modules in the range of 5000–10 000 gates used to take upwards of ten CPU hours, today a top-down automated chip synthesis of one million gates is possible in under 4 h running on four parallel CPUs. Timing QoR has improved dramatically as well. Techniques such as critical path resynthesis and a variety of transformations on technology-mapped circuits have yielded improvements of over 25% in timing QoR. This continual stream of innovation has enabled designs produced by modern synthesis tools to operate with clock frequencies in excess of 800 MHz using modern technologies.

There are other objectives that can be targeted as well as testability. Power-optimizing synthesis is one of these techniques. Power optimization can be performed at the gate level [S18], the RT level, or the behavioral level. Clock gating is a well-known technique used to save on power consumption, but support is needed throughout the EDA tool chain to achieve high productivity.

Other techniques at the RT and behavioral level include operand isolation, which isolates the inputs of an operator by using control signals that indicate whether the operator's outputs will be needed; the logic that implements the operator is then not subjected to changing inputs when the results are not needed. These techniques can net over 40% savings in power with a minimal impact to the design cycle and the design quality.

While the initial purpose of the DesignWare subsystem was to support RTL optimization techniques, it became quickly evident that the feature set enabled "soft-IP" design reuse. Today, the largest available library and the most widely reused components are in the DesignWare Foundation Library. This library is an extensive collection of technology-independent, preverified, virtual micro-architectural components. It contains elementary RTL components, such as adders and multipliers, with multiple implementations to be leveraged by the RTL optimization algorithms; also included are more complex components such as the DW8051 microcontroller and the DWPCI bus interface. The benefits of design reuse are further enhanced by the availability of DesignWare Developer and other tools and methodologies [63] that support design reuse.

F. Behavioral Synthesis

Behavioral synthesis has long been a subject of academic research [80], [40], [25], [76]. The object of behavioral synthesis is twofold. First, by describing a system at a higher level of abstraction, there is a boost in design productivity. Second, because the design transformations are also at a higher level, a greater volume of the feasible design space can be explored, which should result in better designs. Synopsys first shipped Behavioral Compiler in November of 1994. There were a number of innovations embodied in Behavioral Compiler. First, a consistent input/output (I/O) methodology was developed that introduced [69] the "cycle fixed," "super-state fixed," and "free float" I/O scheduling modes. These are necessary for understanding how to interface and verify a design generated by behavioral synthesis. Also developed were a new technique for handling timing constraints, sequential operator modeling, prechaining of operations, and hierarchical scheduling with a method called "behavioral templates" [77]. The key to high-quality designs remains in accurate area and timing estimates. This is true both for the operators that will be implemented on scheduled, allocated, and bound HW processors, and also for early estimates of delays on control signals as well [83].

One constant theme is that it is a steep learning curve for an experienced RTL designer to use Behavioral Compiler, for thinking in terms of FSMs is ingrained, and it takes a conscious effort to change those thought processes. Even with these challenges, hundreds of chips have been manufactured that have Behavioral Compiler synthesized modules. While behavioral synthesis has not achieved the ubiquity of logic synthesis, it has found leveraged use in the design of set-top boxes, ATM switches, wireless LANs, mobile phones, digital cameras, and a wide range of image processing HW. A recent white paper [BLA99] details one design team's experience in using Behavioral Compiler on a design of a packet engine with 20 000 (40 000 after M4 macro expansion) lines of behavioral Verilog.
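The scheduling step at the heart of behavioral synthesis can be illustrated with the classic resource-constrained list-scheduling heuristic. The sketch below assigns the operations of a tiny, invented dataflow graph to control steps, honoring data dependences and a fixed number of functional units; it is meant only to show the flavor of the algorithm family, not the scheduling engine of any particular tool.

```python
OPS = {                       # operation: (resource kind, predecessor operations) -- invented DFG
    "m1": ("mul", []), "m2": ("mul", []), "m3": ("mul", ["m1"]),
    "a1": ("add", ["m1", "m2"]), "a2": ("add", ["a1", "m3"]),
}
RESOURCES = {"mul": 1, "add": 1}     # one multiplier and one adder available per step

def list_schedule(ops, resources):
    done, schedule, step = set(), {}, 0
    while len(done) < len(ops):
        budget = dict(resources)                    # free units in this control step
        for op, (kind, preds) in ops.items():       # consider ready operations in order
            if op not in done and all(p in done for p in preds) and budget[kind] > 0:
                schedule[op] = step
                budget[kind] -= 1
        done |= {op for op, s in schedule.items() if s == step}
        step += 1
    return schedule

print(list_schedule(OPS, RESOURCES))
# {'m1': 0, 'm2': 1, 'm3': 2, 'a1': 2, 'a2': 3} -- m2 waits for the single multiplier
```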
G. Physical Synthesis

During the last few years of the millennium, one of the main challenges in ASIC implementation has been in the area of timing closure. Timing closure refers to the desirable property that the timing predicted by RTL and logic synthesis is actually achieved after the physical design phase. This is often not the case. The root of this divergence lies in the modeling of interconnect delay during synthesis. Accurate estimates of the interconnect delay are not available at this stage, so a statistical technique is used that takes both the net's pin count and the size of the block that contains the net to index into a look-up table for capacitance estimates, which in effect relies on a biased average capacitance. There is a fundamental mismatch between this estimation technique and the fact that performance is dictated by the worst-case delay. Added to this is the fact that wire resistance can be a significant contributor to the (nonlinear) wire delay, and that the gate delays themselves can be nonlinear due to input slope dependence.

There have been many proposals for how to address the timing closure problem. In [65], the observation is made that the traditional synthesis methodology can be maintained if a hierarchical design implementation is employed. In this way, the chip is synthesized as a number of blocks below some threshold, usually quoted in the 50K to 100K gate range. Budgeting, planning, and implementation of the global interconnect for chips with 2000 to 3000 such blocks could prove to be challenging. Another approach is to use a constant delay, or gain, approach for gate sizing after placement and routing [110]. This technique depends on the ability to continuously size gates, and also on delay being a linear function of capacitive loading. Working against this is the modest number of discrete sizes in many ASIC libraries; this is due to the limited selection of drive capabilities offered to synthesis tools. Having said that, this approach can yield improvements in timing QoR when used with libraries that have a rich set of drive strengths.

The approach to timing closure that seems to hold the most promise is one where synthesis and placement are combined into a single implementation pass. This pass takes RTL design descriptions as input and produces an optimized and (legally) placed netlist as an output. In this approach, the optimization routines have access to the placement of the design, and this allows for accurate estimation of the final wire lengths and, hence, the wire delays.
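The wire-load estimation described above amounts to a small table lookup. The sketch below shows one plausible form of it, with an RC-style delay estimate on top; every number in it is invented for illustration, since real wire-load tables are characterized per library and per block-size class.

```python
WLM_TABLES = {                         # block-size class -> fanout -> estimated wire cap (pF)
    "small": {1: 0.002, 2: 0.004, 3: 0.007, 4: 0.011},
    "large": {1: 0.006, 2: 0.013, 3: 0.022, 4: 0.034},
}
SLOPE = {"small": 0.004, "large": 0.012}   # extrapolation slope per extra fanout

def wire_cap(block_class, fanout):
    table = WLM_TABLES[block_class]
    if fanout in table:
        return table[fanout]
    last = max(table)                      # extrapolate beyond the characterized range
    return table[last] + SLOPE[block_class] * (fanout - last)

def net_delay(drive_res_kohm, block_class, fanout, pin_cap_pf=0.003):
    # crude RC estimate: driver resistance times estimated wire plus pin capacitance
    cap = wire_cap(block_class, fanout) + fanout * pin_cap_pf
    return drive_res_kohm * cap            # ns

print(net_delay(1.5, "small", 3), net_delay(1.5, "large", 3))
```

The mismatch the text describes is visible even in this toy: the same net gets very different delay estimates depending only on which block-size class it is assumed to live in, while the real, placed wire may match neither.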

H. Datapath Synthesis

Specialized techniques can often gain significant efficiencies in focused problem domains. Datapath design has traditionally been done manually by an expert designer. Providing a well-structured, high-quality input to logic optimization will usually create a better circuit than providing functionally correct input that is not well structured. The techniques for building high-quality datapath designs are well understood and can be expressed in the form of rules that are used while constructing the design. Using these rules leads to a circuit that has good performance and area. This circuit can be post-processed by a general-purpose logic optimization tool for incremental improvements.

A datapath synthesis tool like Synopsys's Module Compiler implements the datapath design rules as part of its synthesis engine. These rules dynamically configure the datapath design being synthesized based on the timing context of the surrounding logic. Instead of depending upon a downstream logic synthesis tool to restructure the circuit based on timing constraints, initial circuit construction is done in a timing-driven fashion. Hence, the initial circuit structure is not statically determined only by the functionality of the circuit. When the designer knows what kind of structure is wanted, this can allow the creation of higher quality designs with faster compile times.

A key element of the constructive datapath synthesis technique is that the placement can also be constructively determined. Datapaths are often placed in a bit-sliced fashion that is also called tiling. In tiling, the cells are each assigned to a row and column position called its relative placement. The rows represent bit slices and the columns represent separate datapath functions. Specialized datapath placers read this tiling information and convert it into real, legal placement. Integrating datapath synthesis and physical synthesis leads to placed gates, which simultaneously enables structured placement and timing convergence.
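The row/column bookkeeping behind tiling is simple to sketch. Below, each cell of an invented four-bit datapath is given a relative (row, column) position, with rows as bit slices and columns as datapath functions, and a trivial legalizer turns that into absolute coordinates; the cell names, pitches, and column order are all assumptions made up for the example.

```python
FUNCTION_COLUMNS = ["mux", "add", "reg"]      # left-to-right order of datapath functions
BITS = 4

def relative_placement():
    tiles = {}
    for bit in range(BITS):                           # rows = bit slices
        for col, func in enumerate(FUNCTION_COLUMNS): # columns = datapath functions
            tiles[f"{func}_{bit}"] = (bit, col)
    return tiles

def legalize(tiles, row_height=2.0, col_pitch=3.0, x0=0.0, y0=0.0):
    # convert relative (row, col) positions into absolute placement coordinates
    return {cell: (x0 + col * col_pitch, y0 + row * row_height)
            for cell, (row, col) in tiles.items()}

placed = legalize(relative_placement())
print(placed["add_0"], placed["add_3"])   # same column, different bit rows
```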
I. FPGA Synthesis

FPGA design presents a different paradigm to logic optimization techniques. The ASIC approach of using a library of logic gates is not practical for FPGA technologies, especially for LUT-based FPGAs (a LUT can be programmed to compute any function of its k inputs). The FPGA technology-mapping problem and the ASIC technology-mapping problem are, at a basic level, very similar. However, the algorithms used to solve these two problems differ drastically in today's synthesis tools. FPGA-specific mapping algorithms have been developed to take advantage of the special characteristics of LUT FPGA and anti-fuse FPGA technologies, to make them more efficient in run time and to yield better quality of results as compared to the typical ASIC technology-mapping algorithms.

There are also large challenges for performance-driven design for FPGAs. FPGAs have always experienced significant delay in the interconnect because of the interconnect's programmable nature, inasmuch as much of the buffering is hidden in the routing architecture. Because the chip has already been designed and fabricated with dedicated resources, excess demand on any one of them causes the design to become infeasible. Once a design fits onto a specific FPGA part, the challenge is to reach the performance goals. In this sense, challenges in ASIC synthesis have caught up to challenges in FPGA synthesis from a timing convergence standpoint. An interesting approach to FPGA implementation would be to leverage the new unified synthesis and placement solution developed for ASICs in the FPGA environment.

J. The Future

The future of synthesis is one in which it has to grow and stretch to meet the demands of the new millennium. This means coming to terms with the heterogeneous environment of system-on-a-chip designs. C/C++ with class libraries is emerging as the dominant language in which system descriptions are specified (such as SystemC [75]). A C/C++ synthesis tool will leverage state-of-the-art and mature behavioral and RTL synthesis and logic optimization capability. C/C++, however, presents some unique challenges for synthesis [53]. For example, synthesis of descriptions using pointers is difficult; the analysis of pointers and where they point to is nontrivial. Another challenge is in extending the synthesizable subset to include object-oriented features like virtual functions, multiple inheritance, etc.

V. TESTING

The last 35 years have been a very exciting time for the people involved in testing. The objective of testing is, very simply, to efficiently identify networks with one or more defects, which make these networks function incorrectly, and to discard them and only ship the "good" ones. Defects are introduced during the manufacturing process to create connections where none were intended, to break intended connections in one or more of the layers in the fabrication process, or to cause changes in the process variables. These shorts, opens, and process variations can manifest themselves in too many different ways in the IC, which makes it impractical to create an exhaustive list of them, let alone generate tests for each of them. Hence, in the typical test methodology, an abstraction of the defects is used to generate tests and to evaluate tests. The abstract form of the defects is known as faults, and there are many different types of them. The most popular fault model used in the industry is the single stuck-at fault model, which assumes that a net in the design is stuck at a logic value (zero or one), which makes the network function incorrectly. Stuck-at faults are easy to model and grow linearly with design size. There should be no surprise that this fault model has been used over the years, as it epitomizes the reason why fault models exist: namely, to efficiently identify networks with one or more defects. This fault model is still widely used today even though we have made many technology changes. Most of the years of testing of logic networks have been centered on testing for stuck-at faults [49] in sequential networks.

In order to generate test patterns to detect these fault models, initially the tests were generated by hand. In 1966, the first complete test generation algorithm was put forth by J. Paul Roth [94]. This algorithm, the D-Algorithm, was for combinational networks. Test generation algorithms such as the D-Algorithm target faults one at a time and create tests that stimulate the faulty behavior in the circuit and observe it at a measurable point in the design. The D-Algorithm performs a methodical search over the input space for a solution to the problem. Other algorithms have been developed over the years that essentially do the same thing as the D-Algorithm, with different ways of exhausting the search space. These algorithms are aided by numerous heuristics that guide the decisions of the algorithm to converge on a solution faster for the nominal fault being targeted. The D-Algorithm was used for combinational logic when it was applicable, and it was modified in a way that it could be used in a limited scope for sequential logic.

As an industry, we started to have difficulty in generating tests for boards, which had only 500 logic gates on them, with packages which had only one or two logic gates per module. Because of these difficulties, a number of people started to look for different approaches to testing.
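The single stuck-at model lends itself to a very direct fault-simulation check: a pattern detects a fault if the faulty machine's response differs from the good machine's response. The sketch below does exactly that, exhaustively, on an invented three-gate netlist; ATPG algorithms such as the D-Algorithm instead search for such a pattern directly rather than enumerating all of them.

```python
from itertools import product

GATES = [                                 # (output, function, inputs), topological order; invented netlist
    ("n1",  lambda a, b: a & b, ("a", "b")),
    ("n2",  lambda b, c: b | c, ("b", "c")),
    ("out", lambda x, y: x ^ y, ("n1", "n2")),
]

def evaluate(inputs, fault=None):         # fault = (net, stuck_value) or None for the good machine
    nets = dict(inputs)
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]         # stuck-at on a primary input
    for out, fn, ins in GATES:
        nets[out] = fn(*(nets[i] for i in ins))
        if fault and out == fault[0]:
            nets[out] = fault[1]          # tie the faulty internal net to its stuck value
    return nets["out"]

def detecting_patterns(fault):
    pats = []
    for a, b, c in product((0, 1), repeat=3):
        good = evaluate({"a": a, "b": b, "c": c})
        bad  = evaluate({"a": a, "b": b, "c": c}, fault)
        if good != bad:                   # observable difference at an output => detected
            pats.append((a, b, c))
    return pats

print(detecting_patterns(("n1", 0)))      # patterns that detect n1 stuck-at-0
```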

The first reference to the new approach to design was given by Kobayashi [70] in a one-page paper in Japanese. This work was not discovered in the rest of the world until much later. The first industrial users to publish an indication of a dramatic change in the way designs are done, because of automatic test generation complexities, were NEC in 1975 [122] and IBM in 1977 [48]. The NEC approach was called the SCAN approach, and the IBM approach, which was called level-sensitive scan design, got the shortened name of LSSD. The core idea of this new approach was to make all memory elements, except RAMs or ROMs, part of a shift register. This would allow every memory element to be controllable and observable from the primary inputs and outputs of the package. Thus, the memory elements of the sequential network can be treated as if they were, in fact, real primary inputs and outputs. As a result, one could now take advantage of the D-Algorithm's unique ability to generate tests for combinational logic and be able to keep up with ever increasing gate counts. It was clear to many that automatic test generation for sequential networks could not keep up with the rate of increasing network size; the design community was hoping that some new test generation algorithm would be developed that would solve the sequential test generation problem and have the ability to grow with Moore's Law. To date, only very limited success has been obtained with sequential test generation tools.

In addition to the need for a more powerful automatic test generation tool, there was also a need to have designs which did not result in race conditions at the tester. These race conditions, more often than not, resulted in a zero-yield situation (zero yield is the result of a difference between the expected test data response and the chip's actual response). The rapid growth in gate density, and the problems in production caused by race conditions, resulted in many changes in the way designs were done. This rapid growth was tied to the very first days of the Moore's Law growth rate, and today we are looking at an era before us which also has the gate count increasing at exactly the same rate. This new approach of designing to accommodate real difficulties in test became known, in general, as the area of DFT. While these design techniques have been around for a number of years, it has not been until the last 5–7 years that the industry has started to do SCAN designs in earnest. Today we have a large portion of designs which employ SCAN, and the DFT technique of full scan, or a high percentage of scannable memory elements, seems to be commonplace in the industry. As a result, a number of the CAD vendors are offering SCAN insertion and automatic test pattern generation tools, which require SCAN structures, to be able to generate tests for very complex sequential designs and also to make the test patterns race free as well. Thus, it is possible to have tests automatically generated for networks on the order of 10 000 000 gates!
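The shift-register idea behind scan is easy to picture in code. The sketch below models a three-flip-flop scan chain: a pattern is shifted in serially, one functional clock captures the response of the combinational next-state logic, and the result is shifted back out. The chain length, next-state function, and pattern are invented for the example.

```python
class ScanChain:
    def __init__(self, n):
        self.flops = [0] * n                      # memory elements stitched into one chain

    def shift_in(self, pattern):                  # serial load through the scan-in port
        for bit in pattern:
            self.flops = [bit] + self.flops[:-1]

    def capture(self, comb_logic, primary_inputs):
        # one system-clock pulse: flip-flops capture the combinational outputs
        self.flops = comb_logic(self.flops, primary_inputs)

    def shift_out(self):                          # serial unload through the scan-out port
        out, self.flops = list(self.flops), [0] * len(self.flops)
        return out

def next_state(state, pi):                        # toy next-state logic under test
    a, b, c = state
    x = pi[0]
    return [b ^ x, a & c, a | b]

chain = ScanChain(3)
chain.shift_in([1, 0, 1])       # controllable: set the internal state from the scan port
chain.capture(next_state, [1])
print(chain.shift_out())        # observable: read the captured response serially
```

Because every state element is directly controllable and observable this way, the test-generation problem reduces to the combinational case that the D-Algorithm handles well.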
However, there are significant changes brought on by the technology developments facing us today which have a direct influence on testing. The cost of testing is becoming a very central issue for both designers and manufacturers. We are at the point today where the cost of test in terms of NRE is between 30% and 45%, while the actual cost to test a logic gate will equal the cost of manufacturing that gate in the year 2012 [SIA97]. As we continue scaling down into deep submicrometer or nanometer technologies, some testing techniques are losing their effectiveness [121], and other techniques are being called into use, such as delay testing [85], [86], [71]. In addition to the above-mentioned impacts of scaling on testing, there are a number of other issues, which the SIA Roadmap [105] refers to as Grand Challenges, that also affect how testing needs to be done in the future. These include GHz cycle time, higher sensitivity to crosstalk, and tester costs, all of which affect testing. GHz cycle time refers to the formidable design problem that faces the design community in getting timing closure. This problem is exacerbated by the higher sensitivity to crosstalk. As we go forward into these more and more complex and dense technologies, the role of testing will be expanded to be even more cost sensitive and to include new fault models, too.

VI. SUMMARY

We have explored commercial EDA in the past millennium, particularly in the areas of physical design, simulation and verification, synthesis, and test. As stated in Section I, the overall technology growth will enable more programmability. In the coming decade, the demand for ever-more complex electronic systems will continue to grow. Flexibility and quick time to market, in the form of reprogrammable/reconfigurable chips and systems, will increase in importance. Even though programmable devices carry a higher cost in area, performance, and power than hard-wired ones, for market segments where demand for complexity and performance grows at a slower rate than what can be delivered by technology (as described in the scaling roadmap of the ITRS [105]), the emergence of platform-based reconfigurable design seems likely. Already, reconfigurable logic cores for ASIC and SoC use have been announced, as well as a crop of reconfigurable processors.

In this scenario, what is an appropriate EDA tool suite for implementing designs on existing platforms? Little architectural design is required, since the platform defines the basic architecture. At the highest level, system design tools can direct the partitioning of system functionality between SW and HW to meet performance goals while not overflowing any of the fixed resources. The synthesis and physical design tools would not be fundamentally different from FPGA implementation tools used today. The problems of mapping a large netlist onto an FPGA are similar to those of placing a gate array: more complex in that the routing is less easily abstracted, but simpler in that the underlying technology is hidden (i.e., there is no need to model via overlaps, it is always DRC correct, etc.), albeit perhaps better at performance-driven design. Hardware/SW co-verification environments will increase in importance. If sufficient facilities for internal debugging visibility are provided on the platform, then first silicon, or even standard product chips, can always be shipped, and verification can be done in real-time HW. We will have come full circle back to pre-VLSI days, when verification was done in the end product form itself. An added challenge in platform design consists in matching the performance of the processor and ASIC portions with the performance of the reconfigurable portions.

21–39. J. Reading.com/Editorial/1999/coverstory9901. Ma. [32] D. 19th Design Automation Conf. Conf. R.html [6] C. Develop. Babb. Blank.Computer Design. Design. Ackerman. Rudell. and R. “Design automation in IBM. and F. Deibert. T. “Computer-aided verification. Butts. Newell. ACM/IEEE Design Automation Conf. D. Jan. Computer-Aided Design. 631–645. A. and W. 677–691. R.ucla. In any case. H. 1995. San Francisco. June 1996.. “Hierarchical channel router.” in Proc.” in Proc. Burstein.. [9] D. Clarke and R. Sangiovanni-Vincentelli. New York: McGraw-Hill. vol (1–2). M. Darringer.” Electron. pp. (1995) Recent developments in netlist partitioning: A survey.isdmag. 1983. Butts. the emergence of higher level tools to address complexity and time to market is likely to increase the available market for EDA greatly.” IEEE Trans. vanCleemput for help in the preparation of this paper. Design [Online]. R. Chaudhry. Electron. Akers. pp. ACM. Al-Ashari. The nature of the market. 1970. Burstein and R. Simek.: IEEE. Azar and M. E. R. Chen. Denneau. IEEE/ACM Int. W. [28] K.. pp.Computer-Aided Design. et al. “Simulation of IBM enterprise system/9000 models 820 and 900.html [5] C. S. Bryant. A. 25. “Macromodeling CMOS circuits for timing simulation. Application Specific Programmable Products: The Coming Revolution in System Integration: ASIC-WW-PP9811. “Efficient generation of counterexamples and witnesses in symbolic model checking. Coudert. “MIS. 4. Zhao. Verilog Conf.” in Proc. Gianopulos.” IBM System Products Div. M. K. 1981. T. Apr. 5.. G. CA: McGraw-Hill. [Online]. K. NY. L. 1977. McMillan. [21] M. “The Yorktown silicon compiler system. 138–141. J. D. pp. mask verification SW today accounts for roughly half of the physical EDA revenue. no. Nov. pp. [39] G. M. 1981. Aug. vol. Boston: Kluwer. “Verification myths—Automating verification using testbench languages. Lab.” Electron. Chandy and J. [35] G. E. [19] S. 1997. [31] K. Apr. 734–739. and P. pp. M.” IEEE Trans. “Graph-based algorithms for Boolean function manipulation. pp. Gregory. 1994. [34] E. W. Pelavin.” in Proc. 609–615. “The Yorktown simulation engine. not for the programming of that platform. [4] S. mask verification will become significantly more complex and. [37] J.” Commun. M.” in Proc. [17] M. Brayton. At the same time. M. Comput. Chowdhury.. 1062–1081. K. G. and A. “The chip layout problem: An automatic wiring procedure. 40–45. V. But extraction or design rule checking is only necessary for the design of the platform. Mullen. E. C. M. June 1988. Rudell. R. CAD-6. Kelly. G.” in Encyclopedia of Electrical and Electronics Engineering. Otten. J.” in Proc. could change significantly. New York: Wiley. Aug. Agarwal. Schmidt.: IEEE. Bell and A. R. C. [41] M.. June 1982. 1996.” in Silicon Compilation. L. H. 1999. Oct. [18] L. M. Sept. R. Dec.” in Proc. The electronics industry depends on it. the economics of commercial EDA for platform design are likely to change. ACM/IEEE Design Automation Conf. Gajski.. Rep. Available: http://www.” presented at the Panel Session 2. “Mixed 2–4 state simulation in VCS. 1985. 24. July 1992. J. vol. 1981. Endicott. Moceyunas. S. M. D. Develop.. vol.” presented at the IEEE Int. Francis. 751–764. 32nd Design Automation Conf. W. held every other year from 1973 to 1997. ACM/IEEE Design Automation Conf. CA. J. Rose. [27] “CHDL. Decker.” IBM J. DeMichelli.. May) Testbench vs. Available: http://www.. pp. [11] T.” presented at the IEEE Workshop on FPGAs for Custom Computing Machines. P. UCB/ERL M95/104. 
“Hardware accelerator picks up the pace across full design cycle. Haddad. Camposano and W. Grumberg. [29] D. REFERENCES [1] D. Chapiro. Wang. [20] R. B. Varghese. Computer-Aided Design. no. “PLEASURE: A computer program for simple/multiple constrained/unconstrained folding of programmable logic arrays. Tech. pp. K. Snyder. Brown. pp. [12] J.. ACKNOWLEDGMENT The authors would like to thank M. N. Correia. June 1988. pp. Chapiro. pp. Joyner. F. 11. For example. Sept. 1986. H. [2] P. Hachtel. Stieglitz. Brayton. 45–51. 1985. Papp. The intellectual challenge of solving these problems remains the same. ACM/IEEE Design Automation Conf. Tech Rep.” in Proc. 272–280.: IEEE. P. 1971.” in IEEE Spectrum. and B. hence. Damiano. and J. Vranesic. L. DeMicheli and A. pp. W. H. Available: http://vlsicad. Mag.”.com/Editorial/1999/05/0599testbench_vs_high. Batcheller.” in Proc. and S. “VIS: A system for verification and synthesis. Feuer. MA: Kluwer Academic. Lasko. 1987.” IEEE Trans. Chapiro.. “The IBM engineering verification engine. “10 . CA.” IBM J. Boston. With a narrowed user base but a more intensive tool use. vol. J. van Eijndhoven.. R. Eng. 1–81. Butts. Raymond. Sangiovanni-Vincentelli. Wolf. R. a multiple-level logic optimization system. [26] P. June 1990. ACM/IEEE Design Automation Conf. P. July 1981. [36] O. R. pp. Brayton and G. 1237–1249. 25th Design Automation Conf.” IEEE Design Test. 25th Design Automation Conf. Kurshan.. [25] R. “Asynchronous distributed simulation via a sequence of parallel computations. 218–224. J. IEEE Design Automation Conf. but with the added functionality for dealing with scaling effects and ever-increasing complexity. We also expect that the semiconductor vendors and the EDA vendors will continue to form partnerships at an increased pace to tackle the complexities of fielding platform based solutions. and C.. as mentioned above. “A survey of hardware accelerators used in computer-aided design. 160–165.edu/ cheese/survey. Meier. Syst. and X. Eds. Wile. Computer Structures: Readings and Examples. “On the use of the linear assignment algorithm in module placement. 36.MACMILLEN et al. [8] M. 1992. Integration: VLSI J. and L. Manne. Synthesis and Optimization of Digital Circuits. T. P. “LSS: Logic synthesis through local transformations. [13] R. pp. S. 1989. The EDA tools suite for the design of these platforms looks much like the tool suite of today. G. Clow. Case. A. K. Chatterjee.. Villante. Beece.computer-design. High-Level VLSI Synthesis. San Jose. Valera. (1999.html [30] D. 4. 1984. Technol. and A. and J. Allen.. E. vol. Dec. 427–432. H. Camposano. 198–206. Brocco. and J. 1977. C-35. “Virtual wires: Overcoming pin limitations in FPGA-based logic emulators. “Near-optimal placement using a quadratic objective function. De Micheli. (1999. Res. Alpert and A. [15] K. “An efficient logic emulation system. 1991. however. M. 1993. Integrated Syst. Res. W. [10] C. TR01.) System verification from the ground up. [16] R. B. “New algorithms for gate sizing: A comparative study. B. “Emulators. 33rd ACM/IEEE Design Automation Conf. [24] M. Bailey and L. [38] Dataquest. Nan. Misra. R. pp.cs.. 100 and 1000 speed-ups in computers: What constraints change in CAD?. Trevillyan. no. N. 530–537. “An empirical study of on-chip parallelism.: AN INDUSTRIAL VIEW OF ELECTRONIC DESIGN AUTOMATION 1445 ASIC portions with the performance of the reconfigurable portions. ACM/IEEE Design Automation Conf.” in Proc. and S. Clarke. [40] G.” UC Berkeley Electronics Research Laboratory. “The VMS algorithm. 
McCormick. 1988. ACM/IEEE 21st Design Automation Conf. 2 2 2 . Butts. Oct.” IBM J. 1988. “A class of min-cut placement algorithms. G. Laudes. 1992. Blanks. M. L. MA: Addison-Wesley. vol. In-house EDA development at the semiconductor vendors who become the winners in the platform sweepstakes is likely to continue in niche applications. 17. . Tessier. and Z. no. B. N. Heller. O.” in Proc. IEEE Conf. Ed. “A global routing algorithm for general cells. high-level languages for verification. This performance matching will ultimately be reflected back into the implementation tools. Design [Online].: IEEE. more time consuming. Santa Clara. 7. 1977. [14] R. [3] S. 61–67. 1999. Berman. Res. pp. June 1995.1338. Madre. Gosselin. [7] J. vol.: IEEE. pp. [33] E. Bryant.” in Proc. Rosa. “Efficient implementation of a BDD package.. Brace. 25. H. Kahng. M. Maturana. Field-Programmable Gate Arrays. Ofek. Shenoy. pp.” in Proc.” in Proc. T. Develop. R. Agnew and M. 1984. J. Khokhani. Eichenberger. P. [23] M. Breuer. L. [22] M.

1994. ACM/IEEE Design Automation Conf. H.. 2000. pp. and D. McMillan. [86] E. [79] E. Y. and R. Tech.. 10. “Development and application of a designer oriented cyclic simulator. CA. Gregory. [93] W. 2nd ed: Kluwer Academic.” in Proc. 1992. p.. et al. 1992.. Ghosh. Kunkel.” Algorithmica. Munich. 1988.” Integration. [77] T. D. Gregory. “Diagnosis of automata failures: A calculus and a method. Washington. 78. 6. Feb. B. vol.” IEEE Design Test. Int. Camposano. Pixley.. J. “RTL: Avanced by three companies. “An algorithm to path connections and its applications. “LTX—A system for the directed automatic design of LSI circuits. D. Ly. [65] K. [57] G. 4. MA: Kluwer Academic. Int. pp. 1969.” Bell Syst. A. [102] N. Lin. 32nd Design Automation Conf. pp. Ly. 13th Design Automation Conf. Harer. 237–242. pp.D. 1984. Persky. 220. 521–531.” U. and K.: IEEE. S. 671. [68] J.. W. 556–7. NO. Symp. Internal Tech. 5–35. [51] R. and P. “A practical approach to multiple-class retiming. T. 1999. [70] A. Knapp. DECEMBER 2000 [42] D. 3. Köenemann et al. 1986.” [53] A. C. “Behavioral synthesis methodology for HDL-based specification and validation. 1994.” in Proc. W. Shenoy. and D. “Delay testing quality in timing-optimized designs. M. and R. Payne. pp. 1991. Price. Conf. [56] B. Logic Synthesis. 191–196. vol. “Dynamic variable ordering for ordered binary decision diagrams. [100] R. California. Miller. “Bold: The boulder optimal logic design system. Develop. [46] L.” IEEE Trans. 59. Williams. and T. Mattheyses. IEEE Design Automation Conf. Test Conf. 21st Design Automation Conf. van Cleemput. M. Test Conf. 23rd Design Automation Conf. 443–458.” Bell Syst. 1991. Thilking. . Boston. IEEE Design Automation Conf. “Bdsyn: Logic description translator.. p.: IEEE. 862. Mercer.” in Proc. “The problem of simplifying truth functions. pp. 2. Rep. 1979. T. [76] Y. Jan. W. Sechen and A. “Scheduling using behavioral templates. Deutsch. 32nd ACM/IEEE Design Automation Conf. “Retiming: Theory and practice. R. multialgorithm approach to PCB routing.: IEEE. Gupta.” in Proc.-K. Computer-Aided Design. G. 1997. Shiba. [61] D. Mailhot. 48–53. Sangiovanni-Vincentelli.” in Proc. DC. 286–291. van Cleemput. “Hardware synthesis from C/C presented at the Design Automation and Test in Europe. pp. 1961. Lee. MacMillen. Dunn. 1. Dev. October 1991. June 1999. 2–21. June 1976. T. “Tutorial: Design of a logic synthesis system.” in Proc. R. Devadas. Jan. [84] G. et al. Mar. Pittsburgh. [101] N. May 1984. 12. Aug. Ly. Hong. 1985. Keating and P.. Williams. [48] E. “Statistical delay fault coverage and defect level for delay faults. “ISIS: A system for performance-driven resource sharing. Park. pp. vol. D.” in Proc. Sept. [95] R. pp. 2000. PA.” Electron. 1974. Ostapko. P. W. MacMillen. p. [94] J. Miller. Moceyunas. LA. pp. [99] C. Santarini.” IRE Trans. “An efficient implementation of reactivity for modeling hardware. CA: Synopsys.. pp.” in EE Times: CMP Media Inc. J. Gelatt.. 33–36. and M. 1991 IEEE Int. dissertation. [47] K. Martin. “IBM’s engineering design system support for VLSI design and verification. P. “UltraSPARC-I emulation. Hactel. C. Feb. and R. Design Automation of Elec. “Symbolic model checking: An approach to the state explosion problem. [64] B. vol. vol. M. 462–468. vol. [83] R. pp.. “Socrates: A system for automatically synthesizing and optimizing combinational logic. 1984. K. MacMillen. Rudell.Computer-Aided Design. Schaefer. D. pp. [89] C. Dunlop. pp. [75] S. 18. NY. and J. L. Conf. [103] N.” in Digest of Tech. 356–365. 692. De Geus.. 
[74] C. Leiserson and J. 999–1006. 33rd ACM/IEEE Design Automation Conf. June 1979. B. and C. VLSI J. A. VOL. “A ‘dogleg’ channel router. 15.” IBM J..” in Hawaii Int. and M. 4. 31st ACM/IEEE Design Automation Conf. Res.” in Proc.” presented at the Association.” in DAC. Systems Sciences. Shenoy and R. 1994.. Reed. 278–291. D’Amour. [81] A. Syst. 11–13. and G. et al. vol. March 1999. TN. Computer Design. Legl. A. R. T. Fisher. ACM/IEEE 34th Design Automation Conf. Miller. 1991. [43] S. New Orleans.S. 226–233. 1999. July 1966.” in IEEE/ACM Int. “Apparatus for emulation of electronic hardware systems. 175–181. Papers.” Masters thesis. 1417–1444. IEEE/ACM Int. N. pp. “Getting to the bottom of deep submicron. Comput.. [52] J. [62] S. Santamauro. C. 1952. May 1987. Rudell. “The TimberWolf placement and routing package. June 1995. vol. 1990. 1997.1446 IEEE TRANSACTIONS ON COMPUTER AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS. Lightner. Comput. Kim. Saxe. C. 1997. E. 1978. pp. Lin. Cain. Test Conf. 32nd ACM/IEEE Design Automation Conf. 13–18. [59] D.” in Proc. “DesignCon keynote speech. vol.” IEEE Trans. . ACM/IEEE Design Automation Conf.. Knapp. R. Coelho. Eckl.. [50] C.. Conf. Computer-Aided Design. R.. Gateley. Hadlock. D. 101–106. 1982. Electron. Roth. Carnegie Mellon Univ. 22. June 1995. Krishnamoorthy and F. 285–290. Berkeley. Reuse Methodology Manual for System on a Chip. Res.. “Recent developments in high-level synthesis. 70–75.” American Math. R.Computer Aided Design-93. [67] S. Univ. M. B. IEEE Int... Mercer. Parker. 133–136. Patent 5 109 353. [91] W. [82] K.” IEEE Trans. McCluskey. “The high-level synthesis of digital systems. [55] B. [87] G. “Delay test: The next frontier for LSSD test systems. Roelandts. June 1995. multilevel simulations.” in IEEE/ACM Int. Feb. E. Patent 6 026 219. 2000. Segal. D.” in Proc. IEEE.” EE Times. Ma. Sylvester. [98] M. S. Ghosh. “A comparison of CMOS delay models. no. [90] B.” VLSI Technology. Feb. S. [72] S. 433–438.” in Proc. Nov. “A simulation-based protocoldriven scan test design rule checker. pp. ACM/IEEE Design Automation Conf. Park.” in Synopsys Methodology Notes. 1983. 36th ACM/IEEE Design Automation Conf. 16th Design Automation Conf.. L. McKay. S.” in Proc. Bricuad. and D. “HDL synthesis pitfalls and how to avoid them. 1998. 1996. Rochester. Jao. Dewey. Nov. Damiano.. 897–905.” Networks. Sept. vol. “An efficient heuristic procedure for partitioning graphs. Schweikert. and M. Multi-Level Simulation for VLSI Design. and D. vol.” presented at the Custom Integrated Circuits Conf.-L. MacMillen. “Arithmetic optimization using carrysave-adders. [69] D. 1.” in Proc. Bartlett. Sept. pp. “Flip-flop circuit with FLT capability” (in Japanese).. Liao. R. “A shortest path algorithm for grid graphs. “Gordian: VLSI placement by quadratic programming and slicing optimization..” in Proc. Deutsch. Hill and W. ++ [73] C. N. Quine. and D. “Minimization of Boolean functions.” IBM J.. M.” U. Nashville. 42–47. 1988.” in Proc. Hightower. pp. 1970. 323–334. J. 1989. Kirkpatrick. [92] J. [54] D.: IEEE. no. Preas and W... 1969. “SABLE: A tool for generating structured. 1999. pp.Computer-Aided Design.” Ph. ACM/IEEE 35th Design Automation Conf. pp. 1958. and S. 35. [58] F. 250–257. Sangiovanni-Vincentelli. N. “A new symbolic channel router: Yacr2. pp. [45] A.S. (Design Technology Display File 48). Vecchi. [78] D.” in Proc. “Placement algorithms for arbitrarily shaped blocks. Madre. Germany. “Chip layout optimization using critical path weighting. no. IECEO Conf. 346–365. 
McFarland. pp. Sept. Gregory. 1976. pp. 1976. C. and P. B.. [49] R. Hactel. ACM Trans. and T. and H. Knapp. San Francisco.” in Proc. Zepter.” in Proc. “A multipass. 578–587. pp.. 1998. Fogg. pp. “Retiming synchronous circuitry. September 1988. pp. [97] S. 492–499. pp.” (Survey Paper). “Test routines based on symbolic logical statements. M.. “MINI: A heuristic approach for logic minimization. 301–318. in Proc. 291–307. [85] E. [60] D. IEEE Design Automation Conf. Morrison. [44] A. Proc. Conf. [71] B. “Comment on computer-aided design: Simulation of digital design logic. Tjiang. Kobayashi. no. 691–697. 14th Design Automation Conf.” in Proc.” Science. Underwood. 29th ACM/IEEE Design Automation Conf. Iyer.-K. 1968.. “The VHSIC hardware description language (VHDL) program.” Proc.” in Proc. BDSIM: Switch level simulator. Tech. A. “Efficient implementation of retiming. “A robust solution to the timing convergence problem in high-performance design. 1. 79–85. Rudell. 10. “Behavioral synthesis links to logic synthesis. 1993. vol. [63] M. Sys.. “Integrating model checking into the semiconductor design flow. “A linear-time heuristic for improving network partitions.” in Proc. Tjiang. CA: McGraw-Hill. 1977. 4598. and H. MacMillen and T. Ma.. June 1977. [66] T. Jacoby. “Boolean matching of sequential elements. p. Monthly. Mar. Shenoy. 1992. pp. vol. [80] M. Kleinhans. Kernighan and S. Eichelberger and T. M. 1988 Int. J.. S. [88] E. pp. Fiduccia and R.” in Proc. [96] R. Sample. M. Test Conf. Parasch and R.. pp. “A solution to line-routing problems on the continuous plane. Matsue. Pitty. Keutzer. 19. pp. D. 67–74. pp. 1994. pp. O. Liao.” in Proc. ACM/IEEE Design Automation Conf. Keutzer and D. Mountain View. W.” in Proc. Hill and D. pp. June 1984. Eldred. “Optimization by simulated annealing. 7. 203–219. K. 1986.” in Proc. “A logic design structure for LSI testability.. pp. 1956. C-18. May 12.

“Optimal layout of CMOS functional arrays. May 1999. 9th Design Automation Conf. He has coauthored two books on VLSI CAD. NY. degree in electrical engineering and computer science in 1977. Wang. Smith and D. Kapur.S. His responsibilities include system design. M. and held various individual contributor and management positions in the Design Tools Group. he has been Principal Engineer in the Large Systems Technology Group of Synopsys. pp. Synopsys. pp.J. White Paper. E. he was Vice President of Software Development and later Vice President of Engineering at LightSpeed Semiconductor. and VLSI 93. MacMillen has served on the Technical Program Committee. Urbana. Res. R. [119] A. Uehara and W. pp. Don MacMillen (M’83) received the B. The Verilog Hardware Description Language. 318–323.S. in 1981. 1988. [110] I. Buffalo. Smith II. Inmos Corp. Between 1986 and 1991. the M. He holds six patents/ Dr. Smith.. He has been the Vice President of the Advanced Technology Group at Synopsys. His research interests include electronic systems design automation. from Stanford University. Black. [Online]. Inc. From 1984 to 1986. Gupta.html [113] (1998) Formal verification equivalence checker methodology backgrounder. 12th Design Automation Conf. “Algorithms for multilevel logic optimization. Algorithms for VLSI Physical Design Automation. and the doctorate degree in computer science from the University of Karlsruhe. and W. In 1990. [120] N. His work included photolithography process research and development.. S. Berkeley. Fukui. After graduation. Inc. degree in 1974 from Canisius College. Mountain View.. reconfigurable computing and FPGA architecture. H. a practical guide to HW/SW co-verification.” ACM Trans. He has worked on VLSI layout. OR. vol. and S. T. physical synthesis. Mercer. Dennard. Boston. Inc. Cambridge. high-level synthesis and design technology for System on a Chip. R. He is Editor in Chief of The International Journal on Design Automation for Embedded Systems. He was the Technical Chair of ISPD99 and the Program Chair of ISPD2000. Available: http://www.html [115] S.synopsys.S. “Fundamentals of parallel logic simulation. [116] D.E. Halliwell. including ICCD. He has published four papers and holds 17 patents in logic emulaton.com/products/verification/formality_bgr. .com. Wakatsuki. behavioral synthesis. libraries. He is a Member of ACM. CA: Morgan Kaufmann. CA. device characterization and RTL and high-level synthesis. Raul Camposano (M’92–SM’96–F’00) received the B. Szygenda. Hsu. degree in electrical engineering in 1974. IEEE Design Automation Conf. Synopsys. Watson Research Center. he went to work at AT&T Bell Labs. Synopsys. Prior to that he was with Quickturn Design Systems and Mentor Graphics. Stanford. Available: http://www. CA. October 1991. Dwight Hill (S’74–M’76–SM’90) received the B. he worked at General Electric. Rochester. CA. the Design Automation Conference. A. 1991. 1990.” in Proc. Boston. L. [108] S.”. device modeling and development of device characterization SW. CA. NY.. He worked as Architect of the Chip Architect product. In 1982 and 1983. Sherwani. Bahnsen. Sutherland. “Automatic systemlevel test generation and fault location for large digital systems. degree in theoretical chemistry from University of Rochester in 1980. [118] T. June 1986. [112] (1998) Multimillion-gate ASIC verification. R. He serves as an Associate Editor for the IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS and the Integration Journal. 3rd ed.E. 
He currently is Chief Technical Officer of Synopsys. Mountain View. “Iddq test: Sensitivity analysis of scaling. [117] R. California. 1999. primarily at the Murray Hill.synopsys. “An evaluation of the Chandy–Misra–Bryant algorithm for digital logic simulation.: IEEE. “TEGAS2—Anatomy of a general purpose test generation and simulation system. MA. Thomas and P. Maly. Modeling Comput. where he lead the project on high-level synthesis.S. Develop.. High–Level Synthesis for the International Conference on Computer-Aided Design (ICCAD). where his research interests are highperformance system verification and reconfigurable computing. CA. Mountain View. Dr.com/products/hwsw/practical_guide. From 1997 to 1998. test and formal verification. and D. Hill has been active in many conferences and workshops. and deep submicrometer design challenges. [122] A. he was with the University of Siegen and Professor of Computer Science at the University del Norte in Antofagasta.. N. he was a Research Staff Member at the IBM T. 26. [106] G.” IBM J.D. 4. Inc.” in Proc. Degree in 1976 from University of Rochester. he was a Professor at the University of Paderborn and the director of the System Design Technology department at GMD (German National Research Center for Computer Science). P. “PROUD: A fast sea-of-gates placement algorithm. From 1991 to 1994. and VLSI Technology.D.S. “Virtual grid symbolic layout. Available: http://www. 1996. His research interests include RTL synthesis. 1989. . [Online]. which was the first product in the Synopsys Physical Synthesis system.” in Proc. Mountain View. 1981. “Boolean comparison of hardware and flowcharts. San Francisco. W. the diploma in electrical engineering in 1978 from the University of Chile. van Cleemput. 117–127. Germany. he joined Synopsys. degree in computer engineering from the University of Illinois. and on FPGAs. He also serves on the Editorial Board of IEEE DESIGN AND TEST. pp. J.” in Proc.” Ph. In 1994. R. Moorby. ACM/IEEE Design Automation Conf. pp. Inc. process simulation.: IEEE. Michael Butts (M’88) received the B. and C. CA. 1999..” in Proc. 1.: AN INDUSTRIAL VIEW OF ELECTRONIC DESIGN AUTOMATION 1447 [104] N. logic modeling and RTL synthesis. Since September 1998. CA. ACM International Symposium on Field-Programmable Gate Arrays and the International Conference on Field Programmable Logic and Applications. Kuh. degree in 1977. IEEE Design Automation Conf. June 1975. He has published technical papers in process development. M. Harris. he jwas with the Computer Science Research Laboratory at the University of Karlsruhe as a Researcher. His thesis was a multilevel simulator called ADLIB/SABLE.MACMILLEN et al.. [121] T.synopsys. in 1980.D. Mountain View. [105] Semiconductor Industry Association (SIA).. Mr. his M. “Pushing the limits with behavioral compiler. NJ. 308–347. 1. “The international technology roadmap for semiconductors. E. Weste. From 1980 to 1990.. San Jose. Simulat. Dr.sysopsys. [Online]. no.. both from the Massachusetts Institute of Technology. in electrical engineering. Camposano is also a member of the ACM and the IFIP WG10.. Inc. B. and H. 1979. CA. Karlsruhe. dissertation. in 1975. Soule and A. Butts has been a Technical Program Committee Member for the IEEE Symposium on Field-Programmable Custom Computing Machines. 23rd Design Automation Conf. He was responsible for the architecture of Lucent’s “ORCA” FPGA.” in Proc.”. he joined Synopsys. June 1972.html [114] (1999) Balancing your design cycle. location. 106–116. Williams. 
[109] L.com/products/staticverif/staticver_wp.S. Beaverton. 786–792. Funatsu.” in Int. Logical Effort: Designing Fast CMOS Circuits. deep-submicrometer analysis. no. and Ph. J. and the M. vol.S. Sproull. Univ. Available: http://www. and the Ph. Boston. R. Since 1998. Tsay. Test Conf. January 1982. He has published over 50 technical papers and written/edited three books. 2–12. 347–352. pp. [107] R. Sierra Semiconductor. Yamada. pp. including layout synthesis and interactive CAD systems. which was a forerunner to VHDL. 1999. [111] Synopsys Test Complier Manual. MA: Kluwer Academic. MA: Kluwer Academic.

and in 1985 and 1997 he was a Guest Professor and Robert Bosch Fellow at the Universitaet of Hannover. Hannover Germany. as well as being a keynote or invited speaker at a number of conferences. including the 1987 Outstanding Paper Award from the IEEE International Test Conference for his work in the area of VLSI Self-Testing. B. and J. R. degree from Clarkson University. Dr. synthesis and fault-tolerant computing. and the Ph. Underwood and M.1448 IEEE TRANSACTIONS ON COMPUTER AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS. Lindbloom. defining. as Manager of the VLSI Design for Testability Group.” In 1989. E. He is the co-founder of the European Workshop on Design for Testability. His research interests are in design for testability (scan design and self-test). and a 1991 Outstanding Paper Award from the ACM/IEEE Design Automation Conference for his work in the area of Synthesis and Testing (with B. a 1989 Outstanding Paper Award (Honorable Mention) from the IEEE International Test Conference for his work on AC Test Quality (with U. edited a book and recently co-authored another book entitled Structured Logic Testing (with E. Mercer). 12. fault simulation. a 1987 Outstanding Paper Award from the CompEuro’87 for his work on Self-Test. degree in electrical engineering from Colorado State University. Dr. VOL. R. the M. He has been a program committee member of many conferences in the area of testing. Corvallis.E. Dr. NY. Potsdam. Eichelberger. Eichelberger shared the IEEE Computer Society W. Daehn. Mountain View. and promoting design for testability concepts. and C. “for leadership and contributions to the area of design for testability. Boulder. He is also the Chair of the IEEE Technical Subcommittee on Design for Testability. Mercer). Park and M. He is engaged in numerous professional activities. and abroad. He was selected as a Distinguished Visiting Speaker by the IEEE Computer Society from 1982 to 1985. Inc.E. 19. W. Williams was named an IEEE Fellow in 1988. B.” .S. CO. both in the U. Prior to joining Synopsys. Williams and Dr. He is a Chief Scientist at Synopsys. Wallace McDowell Award for Outstanding Contribution to the Computer Art. Inc. Waicukauski). degree in pure mathematics from the State University of New York at Binghamton. E. which dealt with DFT of IBM products. Williams (S’69–M’71–SM’85–F’89) received the B. (with W. He has received a number of best paper awards. and was cited “for developing the level-sensitive scan technique of testing solid-state logic circuits and for leading. Williams is the founder and chair of the annual IEEE Computer Society Workshop on Design for Testability. test generation. DECEMBER 2000 Thomas W. he was a Senior Technical Staff Member with IBM Microelectronics Division in Boulder. Gruetzner. A. Starke). M. CA. NO.A. He has written four book chapters and many papers on testing. He is an Adjunct Professor at the University of Colorado.D.S. He has been a Special-Issue Editor in the area of design for testability for both the IEEE TRANSACTIONS ON COMPUTERS and the IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS.
