

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 27, NO. 10, OCTOBER 2008


Multiprocessor System-on-Chip (MPSoC) Technology
Wayne Wolf, Fellow, IEEE, Ahmed Amine Jerraya, and Grant Martin, Senior Member, IEEE
Abstract—The multiprocessor system-on-chip (MPSoC) uses multiple CPUs along with other hardware subsystems to implement a system. A wide range of MPSoC architectures have been developed over the past decade. This paper surveys the history of MPSoCs to argue that they represent an important and distinct category of computer architecture. We consider some of the technological trends that have driven the design of MPSoCs. We also survey computer-aided design problems relevant to the design of MPSoCs.

Index Terms—Configurable processors, encoding, hardware/software codesign, multiprocessor, multiprocessor system-on-chip (MPSoC).

I. INTRODUCTION

MULTIPROCESSOR systems-on-chips (MPSoCs) have emerged in the past decade as an important class of very large scale integration (VLSI) systems. An MPSoC is a system-on-chip—a VLSI system that incorporates most or all of the components necessary for an application—that uses multiple programmable processors as system components. MPSoCs are widely used in networking, communications, signal processing, and multimedia, among other applications.

We will argue in this paper that MPSoCs constitute a unique branch of evolution in computer architecture, particularly multiprocessors, that is justified by the requirements on these systems: real-time, low-power, and multitasking applications. We will do so by presenting a short history of MPSoCs as well as an analysis of the driving forces that motivate the design of these systems. We will also use this opportunity to outline some important computer-aided design (CAD) problems related to MPSoCs and describe previous work on those problems.

In an earlier paper [1], we presented some initial arguments as to why MPSoCs existed—why new architectures were needed for applications like embedded multimedia and cell phones. In this paper, we hope to show more decisively that MPSoCs form a unique area of multiprocessor design

that is distinct from multicore processors for some very sound technical reasons.

We will start with our discussion of the history of MPSoCs and their relationship to the broader history of multiprocessors in Section II. In Section III, we will look at several characteristics of embedded applications that drive us away from traditional scientific multiprocessing architectures. Based on that analysis, Section IV more broadly justifies the pursuit of a range of multiprocessor architectures, both heterogeneous and homogeneous, to satisfy the demands of high-performance embedded computing applications. Section V describes some important CAD problems related to MPSoCs. We close in Section VI with some observations on the future of MPSoCs.

II. MULTIPROCESSORS AND THE EVOLUTION OF MPSOCS

MPSoCs embody an important and distinct branch of multiprocessors. They are not simply traditional multiprocessors shrunk to a single chip but have been designed to fulfill the unique requirements of embedded applications. MPSoCs have been in production for much longer than multicore processors. We argue in this section that MPSoCs form two important and distinct branches in the taxonomy of multiprocessors: homogeneous and heterogeneous multiprocessors. The importance and historical independence of these lines of multiprocessor development are not always appreciated in the microprocessor community.

To advance this theme, we will first touch upon the history of multiprocessor design. We will then introduce a series of MPSoCs that illustrate the development of this category of computer architecture. We next touch upon multicore processors. We conclude this section with some historical analysis.

A. Early Multiprocessors

Multiprocessors have a very long history in computer architecture; we present only a few highlights here. One early multiprocessor was Illiac IV [2], which was widely regarded as the first supercomputer.
This machine had 64 arithmetic logic units (ALUs) controlled by four control units; the control units could also perform scalar operations in parallel with the vector ALUs. It performed vector and array operations in parallel. Wulf and Harbison [3] describe C.mmp as an early multiprocessor that connected 16 processors to memory through a crossbar. Culler et al. [4] describe the state of the art in parallel computing at the end of the 1990s. They identified Moore's Law as a major influence on the development of multiprocessor architectures. They argue that the field has converged on a


Manuscript received June 11, 2007; revised November 20, 2007. Current version published September 19, 2008. The work of W. Wolf was supported in part by the National Science Foundation under Grant 0324869. This paper was recommended by Associate Editor V. Narayanan.
W. Wolf is with the Georgia Institute of Technology, Atlanta, GA 30332 USA (e-mail: ).
A. A. Jerraya is with CEA-LETI, 38054 Grenoble Cedex 9, France (e-mail: ).
G. Martin is with Tensilica, Inc., Santa Clara, CA 95054 USA (e-mail: ).
Color versions of one or more of the figures in this paper are available online at
Digital Object Identifier 10.1109/TCAD.2008.923415

0278-0070/$25.00 © 2008 IEEE

generic model of multiprocessors in which a collection of computers, each potentially including both CPU and memory, communicate over an interconnection network. This model is particularly aimed at unifying the shared memory and message passing models for multiprocessors. Chapter 2 of their book describes parallel programs. They discuss four applications that they consider representative: ocean current simulation, simulating the evolution of galaxies, ray tracing for computer graphics, and data mining.

Almasi and Gottlieb provide a definition of highly parallel processors that they believe covers many such designs: "A large collection of processing elements that communicate and cooperate to solve large problems fast" [5, p. 5]. They see three motivations for the design of parallel processors: solving very large problems such as weather modeling, designing systems that satisfy limited budgets, and improving programmer productivity. They identify several types of problems that are suitable for parallel processors: easy parallelism such as payroll processing (today, we would put Web service in this category), scientific and engineering calculations, VLSI design automation, database operations, and near- and long-term artificial intelligence.

B. History of MPSoCs

In this section, we will survey the history of MPSoCs. These processors were designed to support a range of applications and exhibit a wide range of architectural styles. Given the large number of chip design groups active around the world, we do not claim that the list of MPSoCs given here is complete. However, we have tried to include many representative processors that illustrate the types of MPSoCs that have been designed and the historical trends in the development of MPSoCs.

The 1990s saw the development of several VLSI uniprocessors created for embedded applications like multimedia and communications. Several single-chip very long instruction word (VLIW) processors were developed to provide higher levels of parallelism along with programmability. Another common approach was the application-specific IC, which stitched together a number of blocks, most of which were not general-purpose computers. The architecture of these systems often closely corresponded to the block diagrams of the applications for which they were designed.

The first MPSoC that we know of is the Lucent Daytona [6], shown in Fig. 1. Daytona was designed for wireless base stations, in which identical signal processing is performed on a number of data channels. Daytona is a symmetric architecture with four CPUs attached to a high-speed bus. The CPU architecture is based on SPARC V8 with some enhancements: 16 × 32 multiplication, division step, touch instruction, and vector coprocessor. Each CPU has an 8-KB 16-bank cache; each bank can be configured as instruction cache, data cache, or scratch pad. The processors share a common address space in memory. The Daytona bus is a split-transaction bus; the caches snoop to ensure cache consistency. The chip was 200 mm2 and ran at 100 MHz at 3.3 V in a 0.25-μm CMOS process.

Fig. 1. Lucent Daytona MPSoC.

Shortly after the Daytona appeared, several other MPSoCs were announced. The C-5 Network Processor [7] was designed for another important class of embedded applications, which is packet processing in networks. The architecture of the C-5 is shown in Fig. 2. Packets are handled by channel processors that are grouped into four clusters of four units each. Three buses handle different types of traffic in the processor. The C-5 uses several additional processors, some of which are very specialized; the executive processor is a reduced instruction set computing (RISC) CPU.

A third important class of MPSoC applications is multimedia processing. An early example of a multimedia processor is the Philips Viper Nexperia [8], shown in Fig. 3. The Viper includes two CPUs: a MIPS and a Trimedia VLIW processor. The MIPS acted as a master running the operating system, whereas the Trimedia acted as a slave that carried out commands from the MIPS. The Viper could implement a number of different mappings of physical memory to address spaces. The system includes three buses, one for each CPU and one for the external memory interface; bridges connect the buses. However, the multiprocessing picture is more complicated because some hardware accelerators are attached to the buses.

A fourth important class of MPSoC applications is the cell phone processor. Early cell phone processors performed baseband operations, including both communication and multimedia operations. The Texas Instruments (TI) OMAP architecture [9] has several implementations. The OMAP 5912, shown in Fig. 4, has two CPUs: an ARM9 and a TMS320C55x digital signal processor (DSP). The ARM acts as a master, and the DSP acts as a slave that performs signal processing operations.

The STMicroelectronics Nomadik [10], shown in Fig. 5, is another MPSoC for cell phones. It uses an ARM926EJ as its host processor. At this level, the architecture appears to be a fairly standard bus-based design. However, two of the units on the bus are programmable accelerators, one for audio and another for video, which make use of the MMDSP+, which is a small DSP. The architectures of the video and audio accelerators are shown in Figs. 6 and 7, respectively. The video accelerator is a heterogeneous multiprocessor, including an MMDSP+ and hardwired units for several important processing stages. These accelerators perform computations such as color space conversion and scaling.

The audio processor relies more heavily on the MMDSP+, thanks to the lower throughput requirements of audio.

Fig. 2. C-5 network processor.

Fig. 3. Viper Nexperia processor.

Fig. 4. TI OMAP 5912.

Fig. 5. ST Nomadik SA.

As shown in Fig. 8, the ARM MPCore [11] is a homogeneous multiprocessor that also allows some heterogeneous configurations. The architecture can accommodate up to four CPUs. Some degree of irregularity is afforded by the memory controller, which can be configured to offer varying degrees of access to different parts of memory for different CPUs. For example, one CPU may be able only to read one part of the memory space, whereas another part of the memory space may not be accessible to some CPUs.

The Intel IXP2855 [12], shown in Fig. 9, is a network processor. Sixteen microengines are organized into two clusters to process packets. An XScale CPU serves as host processor. Two cryptography accelerators perform cryptography functions.

The Cell processor [13] has a PowerPC host and a set of eight processing elements known as synergistic processing elements. The processing elements, PowerPC, and I/O interfaces are connected by the element interconnect bus, which is built from four 16-B-wide rings. Two rings run clockwise, and the other two run counterclockwise. Each ring can handle up to three nonoverlapping data transfers at a time.

The Freescale MC8126 was designed for mobile base-station processing. It has four Starcore SC140 VLIW processors along with shared memory.
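The kind of per-CPU memory protection described above for the MPCore's memory controller can be modeled as a small permission table. The following Python sketch is purely illustrative: the region names, addresses, and permission encoding are invented for the example and do not reflect the actual MPCore programming interface.

```python
# Sketch of a configurable memory controller's access table: each
# region lists, per CPU, whether reads and writes are allowed.
# Region boundaries and permissions are invented for illustration.

REGIONS = {
    # name: (base, size, {cpu_id: "rw" | "r" | ""})
    "shared_code":  (0x0000_0000, 0x1_0000, {0: "r",  1: "r", 2: "r", 3: "r"}),
    "cpu0_private": (0x2000_0000, 0x1_0000, {0: "rw", 1: "",  2: "",  3: ""}),
    "io_buffers":   (0x3000_0000, 0x1_0000, {0: "rw", 1: "r", 2: "r", 3: ""}),
}

def check_access(cpu: int, addr: int, write: bool) -> bool:
    """Return True if the modeled controller would allow this access."""
    for base, size, perms in REGIONS.values():
        if base <= addr < base + size:
            p = perms.get(cpu, "")
            return ("w" in p) if write else ("r" in p)
    return False  # address not mapped for any CPU

# CPU 0 may write its private region; CPU 1 cannot even read it.
assert check_access(0, 0x2000_0004, write=True)
assert not check_access(1, 0x2000_0004, write=False)
```

Making a block addressable only by certain processors, as the table does for the hypothetical cpu0_private region, is what lets programmers reason statically about which tasks can touch a given location.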

Several multiprocessor architectures are being developed for cell phones. One example is the Sandbridge Sandblaster processor [14], [15]. It can process four threads with full interlocking for resource constraints. It also uses an ARM as a host processor. The Infineon Music processor is another example of a cell phone processor. It has an ARM9 host and a set of single-instruction multiple-data (SIMD) processors.

The Cisco Silicon Packet Processor [16] is an MPSoC used in its high-end CRS-1 router. Each chip includes 192 configured extended Tensilica Xtensa cores, which are dedicated to network packet routing. Multiple chips are placed on a board, multiple boards in a rack, and multiple racks in a router, so that the final product may have up to 400 000 processors in it. This is an interesting MPSoC example in that it incorporates some design-for-manufacturability considerations in its architecture: the architecture requires 188 working processors on a die, and the extra four processors are added and dynamically used in order to increase the yield of the SoC.

A Seiko-Epson inkjet printer "Realoid" SoC [17] incorporates seven heterogeneous processors, including a NEC V850 control processor and six Tensilica Xtensa LX processors, each of them differently configured with different instruction extensions to handle different parts of the printing image processing task.

An increasing number of platforms include field programmable gate array (FPGA) fabrics: Xilinx, Altera, and Actel all manufacture devices that provide programmable processors and FPGA fabrics. In some cases, the CPUs are separately implemented as custom silicon from the FPGA fabric. In other cases, the CPU is implemented as a core within the FPGA. These two approaches are not mutually exclusive. FPGA platforms allow designers to use hardware/software codesign methodologies.

Fig. 6. ST Nomadik video accelerator.

Fig. 7. ST Nomadik audio accelerator.

Fig. 8. ARM MPCore.

Fig. 9. Intel IXP2855.

C. MPSoCs versus Multicore Processors

Commercial multicore processors have a much shorter history than do commercial MPSoCs. Hammond et al. [18] proposed a chip multiprocessor (CMP) architecture before the appearance of the Lucent Daytona. Their proposed machine included eight CPU cores, each with its own first-level cache and a shared second-level cache. They used architectural simulation methods to compare their CMP architecture to simultaneous multithreading and superscalar approaches. They found that the CMP provided higher performance on most of their benchmarks and argued that the relative simplicity of CMP hardware made it more attractive for VLSI implementation.

The first multicore general-purpose processors were introduced in 2005.
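The design-for-manufacturability argument behind the Silicon Packet Processor's spare cores (188 working cores required, four spares provisioned) can be made concrete with a short binomial yield calculation. The per-core yield figure used here is an assumed, illustrative number, not a figure from the Cisco design.

```python
from math import comb

def yield_with_spares(cores: int, needed: int, p_good: float) -> float:
    """Probability that at least `needed` of `cores` cores work,
    assuming independent, identical per-core yield p_good."""
    return sum(comb(cores, k) * p_good**k * (1 - p_good)**(cores - k)
               for k in range(needed, cores + 1))

p = 0.995  # assumed probability that a single core is defect-free

no_spares   = yield_with_spares(188, 188, p)  # all 188 must work
with_spares = yield_with_spares(192, 188, p)  # any 188 of 192 suffice

# Under this assumed per-core yield, four spare cores raise the
# usable-die yield from roughly 39% to over 99%.
```

The same redundancy idea appears in memory arrays (spare rows and columns); it pays off precisely because the failure of any single fixed core would otherwise scrap the whole die.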

The Intel Core Duo processor [19] combines two enhanced Pentium M cores on a single die. The processors are relatively separate but share an L2 cache as well as power management logic. The AMD Opteron dual-core multiprocessor [20] provides separate L2 caches for its processors, but a common system request interface connects the processors to the memory controller and HyperTransport. The Niagara SPARC processor [21] has eight symmetrical four-way multithreaded cores.

D. Analysis

Let us now try to fit MPSoCs into the broader history of multiprocessing. MPSoCs generally conform to today's standard multiprocessor model of CPUs and memories attached to an interconnection network. However, that model was generally conceived for fairly homogeneous multiprocessors. As we have seen, many MPSoCs are very heterogeneous. The CPUs in multicore processors have fairly independent hardware units and provide a fairly independent programming model. We believe that MPSoCs provide at least two branches in the taxonomy of multiprocessors: the homogeneous model pioneered by the Lucent Daytona and the heterogeneous model pioneered by the C-5, Nexperia, and OMAP. We would argue that these two branches are distinct from the multicore systems designed for general-purpose computers.

III. HOW APPLICATIONS INFLUENCE ARCHITECTURE

In this section, we will consider how the applications for which MPSoCs are designed influence both the design process and the final architecture. We will study three influences: complex applications, standards-based design, and platform-based design.

A. Complex Applications

Many of the applications to which MPSoCs are applied are not single algorithms but systems of multiple algorithms. As an example, consider the structure of an MPEG-2 encoder [22], as shown in Fig. 10. The two most computationally intensive tasks are motion estimation and discrete cosine transform (DCT) computation, which have very different computational profiles. Motion estimation uses 2-D correlation with fairly simple operations performed on a large number of data elements. However, efficient motion estimation algorithms are also highly data dependent. DCT computation performs a large number of multiplications and additions but with well-established and regular data access patterns. Other tasks, such as variable length coding, operate on much lower volumes of data than either motion estimation or DCT. This means that the nature of the computations done in different parts of the application can vary widely: types of operations, activity profiles, memory bandwidth and access patterns, etc. Moreover, the memory bandwidth requirements of the encoder vary widely across the block diagram. Such variations argue for heterogeneous architectures.

Fig. 10. Block diagram of an MPEG-2 encoder.

B. Standards-Based Design

MPSoCs exist largely because of standards—not standards for the chips themselves but for the applications to which the MPSoCs will be put. Standards provide large markets that can justify the cost of chip design, which runs into the tens of millions of U.S. dollars. Standards also set aggressive goals that can be satisfied only by highly integrated implementations. However, the presence of standards is not an unalloyed blessing for MPSoC designers. When crafting a standard, most committees try to balance standardization and flexibility. One common technique is to define the bit stream generated or consumed but not how that bit stream is generated. This allows implementers to develop algorithms that conform to the bit stream but provide some competitive advantage such as higher quality or lower power consumption. Another common technique for balancing standards with competition is to exclude some aspects of the final product from the standard. User interfaces, for example, are generally not included in standards. As a result, multiple products with different characteristics can occupy the design space for the standard. Standards that allow for multiple implementations encourage the design of MPSoCs.

Furthermore, standards have grown in complexity. Standards committees often provide reference implementations, which are executable programs that perform the operations specified by the standard. The reference implementation for H.264, for example, consists of over 120 000 lines of C code. Whereas these reference implementations may seem to be a boon to system designers, they are generally not directly useful as implementations. Reference implementations are generally very single threaded. Breaking these programs into separate threads for multiprocessor implementation requires considerable effort. Improving the cache behavior of the threads requires more effort. MPSoC design teams generally spend several person-months refining the reference implementation for use. Xu et al. [23] studied the operational characteristics of the H.264 reference video decoder and described some of those effects in more detail.

Complex reference implementations compel system implementations that include a significant amount of software. Depending on the system architecture (SA), some of the software modules may need to operate under strict real-time requirements as well. This software generally must conform to the low power requirements of the system; it makes no sense to operate at ultralow power levels in the hardware units while the software burns excessive energy.
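To make the contrast between the two encoder kernels described above concrete, here is a minimal sketch of full-search block matching, the sum-of-absolute-differences (SAD) form of 2-D correlation used in motion estimation: the operations are trivial, but they sweep a large number of data elements. The tiny frame and block sizes are chosen only to keep the example self-contained.

```python
def sad(block, frame, x, y):
    """Sum of absolute differences between `block` and the same-sized
    window of `frame` whose top-left corner is at (x, y)."""
    return sum(abs(block[i][j] - frame[y + i][x + j])
               for i in range(len(block))
               for j in range(len(block[0])))

def full_search(block, frame):
    """Exhaustively score every candidate position and keep the best:
    simple arithmetic, regular control, but many data accesses."""
    n = len(block)
    best = None
    for y in range(len(frame) - n + 1):
        for x in range(len(frame[0]) - n + 1):
            cost = sad(block, frame, x, y)
            if best is None or cost < best[0]:
                best = (cost, x, y)
    return best  # (cost, x, y)

# A 2x2 block hidden at position (x=2, y=1) of a 4x4 frame is found exactly.
frame = [[9, 9, 9, 9],
         [9, 9, 1, 2],
         [9, 9, 3, 4],
         [9, 9, 9, 9]]
block = [[1, 2],
         [3, 4]]
# full_search(block, frame) -> (0, 2, 1)
```

Practical encoders replace this exhaustive sweep with data-dependent search heuristics, which is exactly why motion estimation, unlike the DCT's fixed multiply-add pattern, has an irregular memory access profile.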

C. Platform-Based Design

Platform-based design divides system design into two phases, as shown in Fig. 11. First, a platform is designed for a class of applications. Later, the platform can be specialized for use in a number of products. The platform design must take into account both functional and nonfunctional requirements on the system. Those requirements may be expressed in a more general form than would be typical in a nonplatform design. A reference implementation may provide detailed functional requirements for part of the system but no guidance about other parts of the system; for example, most reference implementations do not specify the user interface.

Fig. 11. Two-stage platform-based design methodology.

Platform-based design leverages semiconductor manufacturing. A platform can be manufactured in the large volumes that are required to make chip manufacturing economically viable; the platform is then adapted for a particular product in that application space, each of which is sold in smaller volumes. Here, we distinguish between the platform, which is used in a number of different systems, and the product, which is one of those end-use systems. Standard-based design encourages platform-based design: The standard creates a large market with common characteristics as well as the need for product designers to differentiate their products within the scope of the standard. MPSoCs are ideally suited to be used as platforms. Because MPSoCs are not limited to regular architectures, they can be designed to more aggressive specifications than can be met by general-purpose systems. CPUs can be used to customize systems in a variety of ways.

Much of the product customization comes from software; product design tends to be software driven. As a result, the quality and capabilities of the software development environment for the platform in large part determine the usefulness of the platform. Today's software development environments (SDEs) do not provide good support for heterogeneous multiprocessors. In a very few cases, platform vendors may allow a customer to modify the platform in ways that require new sets of masks, but this negates many of the benefits of platform-based design.

IV. ARCHITECTURES FOR REAL-TIME LOW-POWER SYSTEMS

Based upon our survey of MPSoC architectures and applications, we are now ready to confront two important questions. First, why have so many MPSoCs been designed—why not use general-purpose architectures to implement these systems? In particular, why have so many different styles of MPSoC architecture appeared? Second, why design heterogeneous architectures that are hard to program when software plays such a key role in system design using MPSoCs?

A. Performance and Power Efficiency

The answer to the first question comes from the nature of the applications to be implemented. Applications like multimedia and high-speed data communication not only require high levels of performance but also require implementations to meet strict quantitative goals. The term "high-performance computing" is traditionally used to describe applications like scientific computing that require large volumes of computation but do not set out strict goals about how long those computations should take. Embedded computing, in contrast, implies real-time performance. In real-time systems, if the computation is not done by a certain deadline, the system fails. If the computation is done early, the system may not benefit (and in some pathological schedules, finishing one task early may cause another task to be unacceptably delayed). Furthermore, these high-performance systems must often operate within strict power and cost budgets. High-performance embedded computing is a very different problem from high-performance scientific computing. Because we have strict, quantitative goals, we can design more carefully to meet them.

To consider these driving forces in more detail, we will first start with the power/performance budget. Fig. 12 shows a plot of power consumption trends that was taken from a paper by Austin et al. [24]. Their article looked at the performance and power requirements of next-generation mobile systems. They estimated the performance and power requirements of an advanced set of applications that might appear on mobile systems such as cell phones in the near future. This benchmark set included high-performance data networking, voice recognition, video compression/decompression, and other applications. They estimated that this benchmark set would require a 10 000 SpecInt processor. They then showed that publicly available information estimated that general-purpose processors in this class would consume several hundred watts of power. Because batteries will only be able to produce 75 mW in that time frame, general-purpose processors consume several orders of magnitude more power than that available to support this mobile application set. MPSoC designers have repeatedly concluded that business-as-usual is not sufficient to build platforms for high-performance embedded applications. This argument also helps to explain why so many radically different MPSoC architectures have been designed. The differing nature of the computations done in different applications drives designers to different parts of the design space.
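The size of the gap cited above can be checked with one line of arithmetic. The 100-W figure below is an assumed stand-in for "several hundred watts"; since it is at the low end of that range, the real gap is even larger.

```python
# Assumed representative numbers from the discussion above:
gp_power_w = 100.0   # general-purpose processor in the 10 000 SpecInt class
budget_w   = 0.075   # 75-mW mobile power budget

gap = gp_power_w / budget_w
# gap ≈ 1333, i.e., more than three orders of magnitude, consistent
# with the "several orders of magnitude" claim in the text.
```

No plausible amount of circuit-level tuning closes a 1000x gap, which is why the argument points to architectural specialization rather than incremental low-power design.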

Fig. 12. Power consumption trends for desktop processors from Austin et al. [Aus04] © 2004 IEEE Computer Society.

B. Real-Time Performance

Another important motivation to design new MPSoC architectures is real-time performance. As an example, consider a shared memory multiprocessor in which all CPUs have access to all parts of memory. The hardware cannot directly limit accesses to a particular memory location, so noncritical accesses from one processor may conflict with critical accesses from another; access to any memory location in a block may be sufficient to disrupt real-time access to a few specialized locations in that block. Software methods can be used to find and eliminate such conflicts but only at noticeable cost. However, if that memory block is addressable only by certain processors, then programmers can much more easily determine what tasks are accessing the locations to ensure proper real-time responses. Clearly, heterogeneous architectures, although they are harder to program, can provide improved real-time behavior by reducing conflicts among processing elements and tasks.

An MPSoC can save energy in many ways and at all levels of abstraction. Device, circuit, and logic-level techniques [25] are equally applicable to MPSoCs as to other VLSI systems. Irregular memory systems save energy because multiported RAMs burn more energy; eliminating ports in parts of the memory where they are not needed saves energy. Similarly, irregular interconnection networks save power by reducing the loads that must be driven in the network. Using different instruction sets for different processors can make each CPU more efficient for the tasks it is required to execute.

C. Application Structure

The structure of the applications for which MPSoCs are designed is also important. The scientific computing community was mainly concerned with single programs that were too large to run in reasonable amounts of time on a single CPU, as evidenced by Almasi and Gottlieb's statement cited previously that multiprocessors were designed to "solve large problems fast." Embedded computing, in contrast, is all about tuning the hardware and software architecture of a system to meet the specific requirements of a particular set of applications while providing necessary levels of programmability. Different applications have different data flows that suggest multiple different types of architectures, some of which are quite different from the multiprocessors designed for scientific computing.

The homogeneous architectural style is generally used for data-parallel systems, in which the same algorithm is applied to several independent data streams. Wireless base stations are one example; motion estimation, in which different parts of the image can be treated separately, is another. Heterogeneous architectures, in contrast, are designed for heterogeneous applications with complex block diagrams that incorporate multiple algorithms, often using producer–consumer data transfers. A complete video compression system is one important example of a heterogeneous application.

V. CAD CHALLENGES IN MPSOCS

MPSoCs pose a number of design problems that are amenable to CAD techniques. General-purpose computer architects [26] perform extensive experiments but then go on to apply intuition to solve specific design problems. Because embedded system designers have specific goals and a better-defined set of benchmarks, they can apply optimization algorithms to solve many design problems. Many embedded system design and optimization problems can be approached from either the hardware or software perspective; this hardware/software duality allows the same basic design challenge to be solved in very different ways. In this section, we will survey previous works in several problem areas that we consider important CAD problems for MPSoCs: configurable processors and instruction set synthesis, encoding, interconnect-driven design, memory system optimizations, and core- and platform-based designs.

A. Configurable Processors and Instruction Set Synthesis

Instruction sets that are designed for a particular application are used in many embedded systems [27]. The design of customized processors entails a substantial amount of work but can result in substantially reduced power consumption and smaller area. Architectural optimization refers to the design or refinement of microarchitectural features from higher level specifications such as performance or power. CPU configuration refers to tools that generate a hardware description language (HDL) model of a processor based on a set of microarchitectural requirements given by a user.

The MIMOLA system [28] was an early CPU design tool that performed both architectural optimization and configuration. ASIP Meister [29] is a processor configuration system that generates Harvard architecture machines. LISA [30] uses the LISA language to describe processors as a combination of structural and behavioral features. LISA was used as the basis for a commercial company, LISATek, which was acquired by CoWare, and the technology continues to be offered on a commercial basis.

The Tensilica Xtensa processor [31] is a commercial configurable processor that allows users to configure a wide range of processor characteristics, including instruction sets, memory system optimizations, I/O, and SDEs. The Toshiba MeP core is a configurable processor optimized for media processing and streaming. Configurable processors utilize an automated processor generation process and toolset, usually implementing a RISC instruction set. They usually fall into two categories: those that are based on a preexisting processor architecture, which is supported by tools of various levels of sophistication, and those that create a complete new instruction set architecture during processor generation as specified by the user, which is driven by a specification that may be based on a more formalized architectural definition language or may be based on parameter selection and structural choices given in a processor configuration tool.

Processor configuration is usually of two types.
1) Coarse-grained structural configuration. This includes the presence or absence of a variety of interfaces to associated objects. These might include system bus interfaces; local memory interfaces (instruction, data, and unified memories, both RAM and ROM, caches, in one or more instantiations); direct first-in first-out queues that can be used to interconnect processors or connect processors to hardware accelerating blocks and peripherals; direct IO ports for the same purpose; coprocessor interfaces; and on-chip debug, trace, and JTAG. The interface widths and specific protocols may also be configurable or selectable. Additional structural parameters may include the kinds, levels, numbers, and priorities of interrupts; timers; the locations and nature of reset vectors and exception vectors; register file size; the degree of pipelining; and extra load-store units. Other parameterized structural choices may include special instruction inclusion (e.g., multipliers, multiply-accumulate units, dividers, and shifters) or the inclusion of various subsets (such as DSP instruction sets, vectorizing or SIMD instructions, and floating point units).
2) Fine-grained instruction extensions. This adds extra instructions directly into the datapath of the processor that are decoded in the standard way and may even be recognized by the compiler automatically or at least as manually invoked intrinsics. The instruction extensions are usually offered in some kind of compiled architectural description language (e.g., nML or LISA) or may be defined through a combination of HDL code together with templates for instruction formats, encoding, and semantics.

Instruction set synthesis is a form of architectural optimization that concentrates on instructions. Holmer and Despain [32] used a 1% rule to determine what instructions to add to the architecture—an instruction must produce at least 1% improvement in performance to be included. They used microcode compaction techniques to generate candidate instructions. Huang and Despain [33], in contrast, used simulated annealing. Atasu et al. [34] developed graph-oriented instruction generation algorithms that generate large candidate instructions, possibly including disjoint operations and allowing multioperation instructions and encodings.

The XPRES tool from Tensilica, Inc. [35] combines the notions of configurable processor, instruction set extensions, and automated synthesis into a tool flow that starts with user application code and ends up with a configured instruction-extended processor tuned to the particular application(s). The optimization flow is usually an automated one. XPRES uses optimization and design space exploration techniques that allow users to select the right combination of performance improvement, area increase, and energy reduction that best meets their design objectives. The user may be presented with a Pareto-style tradeoff curve that graphically illustrates the choices available between area and cost increases versus performance improvement.

Several commercial approaches that generate application-specific instruction set processors or coprocessors from scratch exist. These start with either application source code (e.g., Synfora PICO, which is based on research from HP Laboratories [36]) or compiled binary code (e.g., Critical Blue Cascade [37]) and generate a custom, highly application-tuned coprocessor. This is often done in conjunction with configurable extensible processors or coprocessors, leading to a refined microarchitecture that is more optimized to the application.

B. Encoding

MPSoCs provide many opportunities for optimizing the encoding of signals that are not directly visible by the user. Signal encoding can improve both area and power consumption. Code compression and bus encoding are two well-known examples of encoding optimizations. Code compression generates optimized encodings of instructions that are then decoded during execution. Compressed instructions do not have to adhere to traditional constraints on instruction set design such as instruction length or fields.
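The Pareto-style tradeoff curve that tools such as XPRES present reduces to finding the candidate configurations that no other configuration beats in both area and performance. As a minimal illustration (not XPRES's actual, proprietary algorithm), the frontier over a set of candidate instruction-extension configurations can be computed like this; the configuration names and numbers below are invented for the example:

```python
def pareto_frontier(configs):
    """Return configurations not dominated in (area, speedup).

    A config dominates another if it has smaller-or-equal area
    AND greater-or-equal speedup, with at least one strict
    inequality."""
    frontier = []
    for c in configs:
        dominated = any(
            o["area"] <= c["area"] and o["speedup"] >= c["speedup"]
            and (o["area"] < c["area"] or o["speedup"] > c["speedup"])
            for o in configs
        )
        if not dominated:
            frontier.append(c)
    return sorted(frontier, key=lambda c: c["area"])

# Hypothetical candidate configurations (area relative to base core).
candidates = [
    {"name": "base",    "area": 1.00, "speedup": 1.0},
    {"name": "mac",     "area": 1.10, "speedup": 1.8},
    {"name": "simd4",   "area": 1.35, "speedup": 2.6},
    {"name": "mac+div", "area": 1.40, "speedup": 1.7},  # dominated by simd4
]
print([c["name"] for c in pareto_frontier(candidates)])  # ['base', 'mac', 'simd4']
```

A real tool would generate the candidate set automatically and estimate area, speed, and energy per candidate; the designer then picks a point on the frontier that best meets the design objectives.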

Code compression was introduced by Wolfe and Chanin [38], who used Huffman coding. A similar scheme was provided on one model of PowerPC [39]. A number of groups have studied code compression since then; we mention only a few here. Lefurgy et al. [40] introduced the branch table, which improves the handling of branches. Larin and Conte [41] tailored the encoding of fields in instructions based on the program's use of those fields. Lekatsas and Wolf [42] used arithmetic coding and Markov models.

Data compression is more complex because data is frequently written, which means that the architecture must be able to both compress and decompress on the fly. Tremaine et al. [43] developed an architecture that uses Lempel-Ziv compression to compress and uncompress data moving between the level 3 cache and main memory. Benini et al. [44] used a data compression unit to compress and decompress data transfers.

Buses are good candidates for encoding because they are heavily used and have large capacitances that consume large amounts of power. Stan and Burleson introduced bus-invert coding [45], which transmits either the true or complement form of a word, depending on the bits of the previous transfer, to reduce the number of bit flips on the bus. Musoll et al. [46] developed working-zone encoding, which divides programs into sets of addresses for encoding. Benini et al. [47] clustered correlated address bits. Lv et al. [48] used a dictionary-based scheme.

C. Interconnect-Driven Design

Traditionally, the design of on-chip interconnects has been an afterthought or a choice made obvious by the buses available with the single processor chosen for the SoC. Because early SoCs were driven by a design approach that sought to integrate onto a single chip the architecture of a complete board, most interconnect choices were based on conventional bus concepts, following conventional memory-mapped, single, or multiple master bus controls, with bridges between buses of different performance and sometimes to split the bus into domains. Today, in SoC design, the two best known SoC buses are ARM AMBA (ARM processors) and IBM CoreConnect (PowerPC and POWER processors). The classic bus-based architecture places low-speed peripherals on a lower speed peripheral bus, which is bridged to a higher speed system bus on which sits the main processor and high speed hardware accelerators. AMBA is actually a bus family: AMBA buses include APB for peripherals, the AMBA AHB for the main processor, and now a crossbar-based split transaction bus for multiple masters and slaves (AXI), as well as bridges to other processors with their own high performance system buses.

It has long been predicted that as SoCs grow in complexity with more and more processing blocks (and MPSoCs will of course move us in this direction faster than a single-processor SoC), current bus-based architectures will run out of performance and, for example, consume far more energy than desirable to achieve the required on-chip communications and bandwidth [49]. The search for alternative architectures has led to the concept of network-on-chip (NoC) (for a good survey, see the book edited by De Micheli and Benini [50]). The key idea behind an NoC is to use a hierarchical network with routers to allow packets to flow more efficiently between originators and targets and to provide additional communications resources (rather than, for example, a single shared bus) so that multiple communications channels can be simultaneously operating. Thus, the energy and performance inefficiencies of shared-medium bus-based communications can be ameliorated. Many NoC architectures are possible, and the research community has explored several of them.

After several years of primarily being a research area, there are now several credible commercial NoCs. Sonics SiliconBackplane was the first commercial NoC [51]. Sonics offers a TDMA-style interconnection network with both fixed reservations of bandwidth along with the grantback of unneeded communication allocations. Today, several other vendors provide NoCs, including Arteris [52] and Silistix. The Silistix interconnect is an asynchronous interconnect that has adaptors for common bus protocols and is meant to fit into an SoC built by using the globally asynchronous locally synchronous paradigm.

D. Core- and Platform-Based Design

Bergamaschi et al. [54] developed a core-based synthesis strategy for the IBM CoreConnect bus. Their Coral tool automates many of the tasks required to stitch together multiple cores using virtual components. Each virtual component describes the interfaces for a class of real components, which can then be specialized to create an implementation. If the glue logic supplied with each core is not sufficient, Coral can synthesize some combinational logic. Coral also checks the connections between cores using Boolean decision diagrams. Cesario and Jerraya [55] use wrappers to componentize both hardware and software. Wrappers perform low-level adaptations like protocol transformation. A golden abstract architecture defines the platform.

CoWare N2C [56] and Cadence VCC [57] were very early tools for SoC integration and platform-based design from the mid to late 1990s. N2C supported relatively simple platforms with one main processor, a standard bus architecture, and hardware peripherals of various types including accelerators. Its primary distinguishing feature was interface synthesis, which is a way of describing high-level communication protocols between software functions and hardware blocks, and, in addition, the capability of directly generating, via template-driven generation, relevant software and hardware that would implement the selected protocols. VCC was a slightly more generic tool that implemented the function-architecture codesign concept. In this, the function of an intended system would be captured independent of the architecture, and an explicit mapping between functional blocks and architectural elements and between communications channels of functional blocks and associated architectural channels would be used to carry out the performance estimation of the resulting mapped system. A variety of links to both hardware and software implementation were also generated from VCC. VCC survived for a few years as a commercial product before being abandoned by Cadence. N2C was gradually replaced by CoWare with the Platform Architect tool.
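The bus-invert rule of Stan and Burleson can be sketched in a few lines. The following is an illustrative behavioral model of the encoding described above (true or complemented word plus one extra invert line), not production RTL:

```python
def bus_invert_encode(words, width=8):
    """Bus-invert coding sketch: for each word, if driving it
    onto the bus would flip more than width/2 lines relative to
    the previous bus state, drive its complement instead and
    assert the invert line."""
    mask = (1 << width) - 1
    prev = 0
    out = []
    for w in words:
        flips = bin((w ^ prev) & mask).count("1")
        if flips > width // 2:
            w = ~w & mask   # complement reduces the flip count
            invert = 1
        else:
            invert = 0
        out.append((w, invert))
        prev = w            # the bus now holds the encoded value
    return out

def bus_invert_decode(encoded, width=8):
    mask = (1 << width) - 1
    return [(~w & mask) if inv else w for w, inv in encoded]

data = [0x00, 0xFF, 0x0F, 0xF0]
enc = bus_invert_encode(data)
assert enc == [(0x00, 0), (0x00, 1), (0x0F, 0), (0x0F, 1)]
assert bus_invert_decode(enc) == data
```

The scheme bounds the number of data-line transitions per transfer to width/2 (plus the invert line itself), which is what saves switching energy on a heavily loaded bus.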

Platform Architect follows a more conventional line in common with other existing commercial system level tools such as ARM RealView ESL SoC Designer (now Carbon Design Systems). In these kinds of tools, a variety of IP blocks such as processors, memories, DMA controllers, buses, and other peripherals are selected from libraries and are interconnected, assuming that the right kinds of protocol matching bridges exist. From the architecture, simulation models based on SystemC can be generated, and precompiled software is loaded into the appropriate processor instruction set simulator models. The systems can be simulated, often in a choice between a cycle-accurate mode and a fast functional mode as a "virtual prototype," and various system performance information can be tracked and analyzed. Bus contention and traffic and the relationship between processor activity and bus activity can all be assessed. These more modern system level design tools offer less methodology assistance in deciding on the mapping between software tasks and processors and between communication abstractions and implementations than N2C and VCC offered. However, because they are not restricted in the architectural domain to just the elements that supported the N2C and VCC methodologies, they are arguably more general and also easier for a design team to adopt as part of a more gradual move to system-level MPSoC design. Additional information about system level design tools is available in the book by Bailey et al. [53].

E. Hardware/Software Codesign

Hardware/software codesign was developed in the early 1990s. Hardware/software cosynthesis can be used to explore the design space of heterogeneous multiprocessors. Cosyma [67], Vulcan [68], and the system of Kalavade and Lee [69] were early influential cosynthesis systems, with each taking a very different approach to solving all the complex interactions due to the many degrees of freedom in hardware/software codesign: Cosyma's heuristic moved operations from software to hardware, Vulcan started with all operations in hardware and moved some to software, and Kalavade and Lee used iterative performance improvement and cost reduction steps. Cost estimation is one very important problem in cosynthesis because area, performance, and power must be estimated both accurately and quickly. Cost estimation methods were developed by Herman et al. [70], Henkel and Ernst [71], Vahid and Gajski [72], and Xie and Wolf [73], among others. A number of other algorithms have been developed for cosynthesis: Vahid et al. used binary search [74], Eles et al. [75] compared simulated annealing and tabu search, Lycos [76] used heuristic search over basic scheduling blocks, Wolf [77] used heuristic search that alternated between scheduling and allocation, Dick and Jha [78] applied genetic algorithms, COSYN [79] used heuristics optimized for large sets of tasks, and Serra [80] combined static and dynamic task scheduling.

F. Memory System Optimizations

A variety of algorithms and methodologies have been developed to optimize the memory system behavior of software [58]. A great deal of work at the IMEC, such as that by Franssen et al. [59], De Greef et al. [60], and Masselos et al. [61], developed algorithms for data transfer and storage management. Catthoor et al. developed an influential methodology for memory system optimization in multimedia systems: data flow analysis and model extraction, global data flow transformations, global loop and control flow optimization, data reuse optimization, memory organization design, and in-place optimization. Panda et al. [62] developed algorithms that place data in memory to reduce cache conflicts and place arrays accessed by inner loops. Their algorithm for placing scalars used a closeness graph to analyze the relationships between accesses. They used an interference graph to analyze the behavior of arrays that have intersecting lifetimes. Kandemir et al. [63] combined data and loop transformations to optimize the behavior of a program in cache.

Scratch pad memories have been proposed as alternatives to address-mapped caches. A scratch pad memory occupies the same level of the memory hierarchy as does a cache, but its contents are determined by software, giving more control of access times to the program. Panda et al. [64] developed algorithms to allocate variables to the scratchpad. They statically allocated scalars and defined a series of metrics, including variable and interference access counts. Gordon-Ross et al. [65] optimized the cache hierarchy: first, the cache size for each level, then the line size for each level, and then the associativity per level. Balasubramonian et al. [66] have proposed configurable caches that can be reconfigured at run time based on the program's temporal behavior.

G. SDEs

SDEs for single processors can now be quickly retargeted to new architectures; a variety of commercial and open-source SDEs illustrate this technique. No comparable retargeting technology exists for multiprocessors. As a result, MPSoC development environments tend to be collections of tools without substantial connections between them, and programmers and debuggers often find it very difficult to determine the true state of the system.

Popovici et al. [81] developed a software development platform that can be retargeted to different multiprocessors. It provides four levels of abstraction. The system architecture (SA) level models the system as a set of functions grouped into tasks connected by abstract communication links; the SA model is formulated by using standard Simulink functions. The virtual architecture level refines the software into a C code that communicates by using a standard API. The transaction accurate architecture level models the machine and operating system in sufficient detail to allow the debugging of communication and performance. The virtual prototype level is described in terms of the processor and low-level software used to implement the final system.
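The scratchpad allocation algorithms of Panda et al. are graph based; as a simpler illustration of the underlying idea (put the most intensively accessed data in the software-managed scratchpad), here is a hypothetical greedy variant that ranks variables by accesses per byte. The variable names and profile counts are invented for the example:

```python
def allocate_scratchpad(variables, capacity):
    """Greedy scratchpad allocation sketch.

    variables: list of (name, size_bytes, access_count) tuples.
    Rank variables by access density (accesses per byte) and
    place them into the scratchpad until capacity runs out;
    everything else stays in cached main memory."""
    ranked = sorted(variables, key=lambda v: v[2] / v[1], reverse=True)
    spm, cached, free = [], [], capacity
    for name, size, _ in ranked:
        if size <= free:
            spm.append(name)
            free -= size
        else:
            cached.append(name)
    return spm, cached

# Hypothetical profile data for a small media kernel.
vars_ = [
    ("coeffs", 256, 4096),   # 16 accesses/byte: best candidate
    ("frame", 4096, 8192),   # 2 accesses/byte, too large anyway
    ("hist", 512, 2048),     # 4 accesses/byte
]
spm, cached = allocate_scratchpad(vars_, capacity=1024)
print(spm, cached)  # ['coeffs', 'hist'] ['frame']
```

Because scratchpad hits have a fixed, known latency, an allocation such as this also tightens worst-case execution time estimates, which matters for the real-time requirements discussed earlier.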

VI. CONCLUSION

We believe that MPSoCs illustrate an important chapter in the history of multiprocessing. In conclusion, we need to ask whether that chapter will close as general-purpose architectures take over high-performance embedded computing or whether MPSoCs will continue to be designed as alternatives to general-purpose processors. If processors can supply sufficient computational power, then the ease with which they can be programmed pulls system designers toward uniprocessors. Whereas DSPs have somewhat different architectures than do desktop and laptop systems, they are von Neumann machines that support traditional software development methods; much audio processing, for example, is performed on DSPs. It may be that, thanks to Moore's Law advances, even today's applications can be handled by future generations of general-purpose systems, except for very low-cost and low-power applications. However, we believe that new applications that will require the development of new MPSoCs will emerge. Embedded computer vision is one example of an emerging field that can use essentially unlimited amounts of computational power but must also meet real-time, low-power, and low-cost requirements. The design methods and tools that have been developed for MPSoCs will continue to be useful for these next-generation systems.

ACKNOWLEDGMENT

The authors would like to thank B. Ackland and S. Dutta for the helpful discussions of their MPSoCs.

REFERENCES

[1] W. Wolf, "The future of multiprocessor systems-on-chips," in Proc. 41st Des. Autom. Conf., 2004, pp. 681–685.
[2] W. Wolf, Modern VLSI Design: System-on-Chip Design, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.
[3] The Next Evolution in Enterprise Computing: The Convergence of Multicore X86 Processing and 64-bit Operating Systems, Advanced Micro Devices, 2005. [Online]. Available: http://www.amd.com
[4] D. E. Culler, J. P. Singh, and A. Gupta, Parallel Computer Architecture: A Hardware/Software Approach. San Francisco, CA: Morgan Kaufmann, 1999.
[5] G. Almasi and A. Gottlieb, Highly Parallel Computing, 2nd ed. Redwood City, CA: Benjamin Cummings, 1994.
[6] W. J. Bouknight, S. A. Denenberg, D. E. McIntyre, J. M. Randall, A. H. Sameh, and D. L. Slotnick, "The Illiac IV system," Proc. IEEE, vol. 60, no. 4, pp. 369–388, Apr. 1972.
[7] C-5 Network Processor Architecture Guide, C-Port Corp., North Andover, MA, 2001. [Online]. Available: http://www.freescale.com/netcomm/doc/ref_manual/C5NPD0-AG.pdf
[8] S. Dutta, R. Jensen, and A. Rieckmann, "Viper: A multiprocessor SOC for advanced set-top box and digital TV systems," IEEE Des. Test Comput., vol. 18, no. 5, pp. 21–31, Sep./Oct. 2001.
[9] OMAP5912 Multimedia Processor Device Overview and Architecture Reference Guide, Texas Instruments Inc., Dallas, TX, 2004. [Online]. Available: http://www.ti.com
[10] A. Artieri, V. D'Alto, R. Chesson, M. Hopkins, and M. C. Rossi, Nomadik—Open Multimedia Platform for Next Generation Mobile Devices, 2003, technical article TA305.
[11] J. Glossner, D. Iancu, J. Lu, E. Hokenek, and M. Moudgill, "A software-defined communications baseband design," IEEE Commun. Mag., vol. 41, no. 1, pp. 120–128, Jan. 2003.
[12] Intel IXP2855 Network Processor, Intel Corp., Santa Clara, CA, 2005.
[13] M. Kistler, M. Perrone, and F. Petrini, "Cell multiprocessor communication network: Built for speed," IEEE Micro, vol. 26, no. 3, pp. 10–23, May/Jun. 2006.
[14] B. G. Haskell, A. Puri, and A. N. Netravali, Digital Video: An Introduction to MPEG-2. New York: Chapman & Hall, 1997.
[15] J. Goodacre and A. N. Sloss, "Parallelism and the ARM instruction set architecture," Computer, vol. 38, no. 7, pp. 42–50, Jul. 2005.
[16] W. Eatherton, "The push of network processing to the top of the pyramid," keynote, Symp. Architectures Netw. Commun. Syst., 2005.
[17] S. Gochman, A. Mendelson, A. Naveh, and E. Rotem, "Introduction to Intel Core Duo processor architecture," Intel Technol. J., vol. 10, no. 2, pp. 89–97, May 15, 2006.
[18] L. Hammond, B. A. Nayfeh, and K. Olukotun, "A single-chip multiprocessor," Computer, vol. 30, no. 9, pp. 79–85, Sep. 1997.
[19] J. Glossner, D. Iancu, M. Moudgill, S. Jinturkar, E. Hokenek, and M. Schulte, The Sandbridge Convergence Platform, Sandbridge Technologies, 2005.
[20] K. …, "Design of a printer controller SoC using a multiprocessor configuration."
[21] A. S. Leon, K. W. Tam, J. L. Shin, D. Weisner, and F. Schumacher, "A power-efficient high-throughput 32-thread SPARC processor," in Proc. IEEE Int. Solid-State Circuits Conf., 2006.
[22] B. Ackland, A. Anesko, D. Brinthaupt, S. J. Daubert, A. Kalavade, J. Knobloch, E. Micca, M. Moturi, C. J. Nicol, J. H. O'Neill, J. Othmer, E. Sackinger, K. J. Singh, J. Sweet, C. J. Terman, and J. Williams, "A single-chip, 1.6-billion, 16-b MAC/s multiprocessor DSP," IEEE J. Solid-State Circuits, vol. 35, no. 3, pp. 412–424, Mar. 2000.
[23] J. Xu, W. Wolf, J. Henkel, and S. Chakradhar, "A design methodology for application-specific networks-on-chips," ACM Trans. Embed. Comput. Syst., vol. 5, no. 2, pp. 263–280, May 2006.
[24] T. Austin, D. Blaauw, S. Mahlke, T. Mudge, C. Chakrabarti, and W. Wolf, "Mobile supercomputers," Computer, vol. 37, no. 5, pp. 81–83, May 2004.
[25] W. Wulf and S. Harbison, "Reflections in a pool of processors—An experience report on C.mmp/Hydra," in Proc. AFIPS Nat. Comput. Conf., 1978, pp. 939–951.
[26] J. L. Hennessy and D. A. Patterson, Computer Architecture: A Quantitative Approach, 4th ed. San Francisco, CA: Morgan Kaufmann, 2006.
[27] P. Ienne and R. Leupers, Eds., Customizable Embedded Processors. San Francisco, CA: Morgan Kaufmann, 2006.
[28] P. Marwedel, "The MIMOLA design system: Tools for the design of digital processors," in Proc. 21st Des. Autom. Conf., 1984, pp. 587–593.
[29] S. Kobayashi, K. Mita, Y. Takeuchi, and M. Imai, "Rapid prototyping of JPEG encoder using the ASIP development system: PEAS-III," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2003, pp. 485–488.
[30] A. Hoffmann, O. Schliebusch, A. Nohl, G. Braun, O. Wahlen, and H. Meyr, "A novel methodology for the design of application-specific instruction-set processors (ASIPs) using a machine description language," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 20, no. 11, pp. 1338–1354, Nov. 2001.
[31] C. Rowen, Engineering the Complex SoC: Fast, Flexible Design With Configurable Processors. Upper Saddle River, NJ: Prentice-Hall, 2004.
[32] B. Holmer and A. Despain, "Viewing instruction set design as an optimization problem," in Proc. 24th Int. Symp. Microarchitecture, 1991, pp. 153–162.
[33] I. Huang and A. M. Despain, "Synthesis of application specific instruction sets," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 14, no. 6, pp. 663–675, Jun. 1995.
[34] K. Atasu, L. Pozzi, and P. Ienne, "Automatic application-specific instruction-set extensions under microarchitectural constraints," in Proc. 40th Des. Autom. Conf., 2003, pp. 256–261.
[35] D. Goodwin and D. Petkov, "Automatic generation of application specific processors," in Proc. Int. Conf. Compilers, Architecture, Synthesis Embedded Syst., 2003, pp. 137–147.
[36] V. Kathail, S. Aditya, and B. R. Rau, "Automatic architectural synthesis of VLIW and EPIC processors," in Proc. 12th Int. Symp. Syst. Synthesis, 1999, pp. 107–113.
[37] R. Taylor and P. Morgan, "Using coprocessor synthesis to accelerate embedded software," 2005.
[38] A. Wolfe and A. Chanin, "Executing compressed programs on an embedded RISC architecture," in Proc. 25th Annu. Int. Symp. Microarchitecture, 1992, pp. 81–91.
[39] T. M. Kemp, R. K. Montoye, J. D. Harper, J. D. Palmer, and D. J. Auerbach, "A decompression core for PowerPC," IBM J. Res. Develop., vol. 42, no. 6, pp. 807–812, Nov. 1998.

[40] C. Lefurgy, P. Bird, I.-C. Chen, and T. Mudge, "Improving code density using compression techniques," in Proc. 30th Annu. Int. Symp. Microarchitecture, 1997, pp. 194–203.
[41] S. Larin and T. Conte, "Compiler-driven cached code compression schemes for embedded ILP processors," in Proc. 32nd Annu. Int. Symp. Microarchitecture, 1999, pp. 82–92.
[42] H. Lekatsas and W. Wolf, "SAMC: A code compression algorithm for embedded processors," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 18, no. 12, pp. 1689–1701, Dec. 1999.
[43] R. B. Tremaine, P. A. Franaszek, J. T. Robinson, C. O. Schulz, T. B. Smith, M. E. Wazlowski, and P. M. Bland, "IBM memory expansion technology (MXT)," IBM J. Res. Develop., vol. 45, no. 2, pp. 271–285, Mar. 2001.
[44] L. Benini, D. Bruni, A. Macii, and E. Macii, "Hardware-assisted data compression for energy minimization in systems with embedded processors," in Proc. DATE, 2002, pp. 449–453.
[45] M. Stan and W. Burleson, "Bus-invert coding for low-power I/O," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 3, no. 1, pp. 49–58, Mar. 1995.
[46] E. Musoll, T. Lang, and J. Cortadella, "Working-zone encoding for reducing the energy in microprocessor address buses," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 6, no. 4, pp. 568–572, Dec. 1998.
[47] L. Benini, G. De Micheli, E. Macii, M. Poncino, and S. Quer, "Power optimization of core-based systems by address bus encoding," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 6, no. 4, pp. 554–562, Dec. 1998.
[48] T. Lv, J. Henkel, H. Lekatsas, and W. Wolf, "A dictionary-based en/decoding scheme for low-power data buses," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 11, no. 5, pp. 943–951, Oct. 2003.
[49] W. J. Dally and B. Towles, "Route packets, not wires: On-chip interconnection networks," in Proc. Des. Autom. Conf., 2001, pp. 684–689.
[50] G. De Micheli and L. Benini, Networks on Chips: Technology and Tools. San Francisco, CA: Morgan Kaufmann, 2006.
[51] D. Wingard, "MicroNetwork-based integration for SOCs," in Proc. Des. Autom. Conf., 2001, pp. 673–677.
[52] A. Fanet, "NoC: The arch key of IP integration methodology," in Proc. MPSoC Symp., 2005.
[53] B. Bailey, G. Martin, and A. Piziali, ESL Design and Verification: A Prescription for Electronic System Level Methodology. San Francisco, CA: Morgan Kaufmann, 2007.
[54] R. A. Bergamaschi, S. Bhattacharya, R. Wagner, C. Fellenz, M. Muhlada, W. R. Lee, F. White, and J.-M. Daveau, "Automating the design of SOCs using cores," IEEE Des. Test Comput., vol. 18, no. 5, pp. 32–45, Sep./Oct. 2001.
[55] W. Cesario and A. A. Jerraya, "Component-based design for multiprocessor systems-on-chips," in Multiprocessor Systems-on-Chips, A. A. Jerraya and W. Wolf, Eds. San Francisco, CA: Morgan Kaufmann, 2004.
[56] K. Van Rompaey, D. Verkest, I. Bolsens, and H. De Man, "CoWare—A design environment for heterogeneous hardware/software systems," in Proc. Eur. Des. Autom. Conf., 1996, pp. 252–257.
[57] S. Krolikoski, F. Schirrmeister, B. Salefski, J. Rowson, and G. Martin, "Methodology and technology for virtual component driven hardware/software co-design on the system-level," in Proc. IEEE Int. Symp. Circuits Syst., 1999, vol. 6, pp. 456–459.
[58] W. Wolf and M. Kandemir, "Memory system optimization of embedded software," Proc. IEEE, vol. 91, no. 1, pp. 165–182, Jan. 2003.
[59] F. Franssen, L. Nachtergaele, H. Samsom, F. Catthoor, and H. De Man, "Control flow optimization for fast system simulation and storage minimization," in Proc. Eur. Des. Autom. Conf., 1994.
[60] E. De Greef, F. Catthoor, and H. De Man, "Memory organization for video algorithms on programmable signal processors," in Proc. ICCD, 1995, pp. 552–557.
[61] K. Masselos, F. Catthoor, C. E. Goutis, and H. De Man, "A performance oriented use methodology of power optimizing code transformations for multimedia applications realized on programmable multimedia processors," in Proc. IEEE Workshop Signal Process. Syst., 1999, pp. 261–270.
[62] P. R. Panda, N. D. Dutt, and A. Nicolau, "Memory data organization for improved cache performance in embedded processor applications," ACM Trans. Des. Autom. Electron. Syst., vol. 2, no. 4, pp. 384–409, Oct. 1997.
[63] M. Kandemir, J. Ramanujam, and A. Choudhary, "Improving cache locality by a combination of loop and data transformations," IEEE Trans. Comput., vol. 48, no. 2, pp. 159–167, Feb. 1999.
[64] P. R. Panda, N. D. Dutt, and A. Nicolau, "On-chip vs. off-chip memory: The data partitioning problem in embedded processor-based systems," ACM Trans. Des. Autom. Electron. Syst., vol. 5, no. 3, pp. 682–704, Jul. 2000.
[65] A. Gordon-Ross, F. Vahid, and N. Dutt, "Automatic tuning of two-level caches to embedded applications," in Proc. DATE, 2004, pp. 208–213.
[66] R. Balasubramonian, D. Albonesi, A. Buyuktosunoglu, and S. Dwarkadas, "A dynamically tunable memory hierarchy," IEEE Trans. Comput., vol. 52, no. 10, pp. 1243–1258, Oct. 2003.
[67] R. Ernst, J. Henkel, and T. Benner, "Hardware-software cosynthesis for microcontrollers," IEEE Des. Test Comput., vol. 10, no. 4, pp. 64–75, Dec. 1993.
[68] R. K. Gupta and G. De Micheli, "Hardware-software cosynthesis for digital systems," IEEE Des. Test Comput., vol. 10, no. 3, pp. 29–41, Sep. 1993.
[69] A. Kalavade and E. A. Lee, "The extended partitioning problem: Hardware/software mapping, scheduling, and implementation-bin selection," Des. Autom. Embed. Syst., vol. 2, no. 2, pp. 125–163, Mar. 1997.
[70] D. Hermann, J. Henkel, and R. Ernst, "An approach to the adaptation of estimated cost parameters in the COSYMA system," in Proc. 3rd Int. Workshop Hardware/Software Codes., 1994, pp. 100–107.
[71] J. Henkel and R. Ernst, "An approach to automated hardware/software partitioning using a flexible granularity that is driven by high-level estimation techniques," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 9, no. 2, pp. 273–289, Apr. 2001.
[72] F. Vahid and D. D. Gajski, "Incremental hardware estimation during hardware/software functional partitioning," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 3, no. 3, pp. 459–464, Sep. 1995.
[73] Y. Xie and W. Wolf, "Hardware/software co-synthesis with custom ASICs," in Proc. ASPDAC, 2000, pp. 129–133.
[74] F. Vahid, J. Gong, and D. D. Gajski, "A binary-constraint search algorithm for minimizing hardware during hardware/software partitioning," in Proc. Eur. Des. Autom. Conf., 1994, pp. 214–219.
[75] P. Eles, Z. Peng, K. Kuchcinski, and A. Doboli, "System level hardware/software partitioning based on simulated annealing and tabu search," Des. Autom. Embed. Syst., vol. 2, no. 1, pp. 5–32, Jan. 1997.
[76] J. Madsen, J. Grode, P. V. Knudsen, M. E. Petersen, and A. Haxthausen, "LYCOS: The Lyngby co-synthesis system," Des. Autom. Embed. Syst., vol. 2, no. 2, pp. 195–235, Mar. 1997.
[77] W. Wolf, "An architectural co-synthesis algorithm for distributed, embedded computing systems," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 5, no. 2, pp. 218–229, Jun. 1997.
[78] R. P. Dick and N. K. Jha, "MOGAC: A multiobjective genetic algorithm for hardware-software cosynthesis of distributed embedded systems," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 17, no. 10, pp. 920–935, Oct. 1998.
[79] B. P. Dave, G. Lakshminarayana, and N. K. Jha, "COSYN: Hardware-software co-synthesis of heterogeneous distributed systems," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 7, no. 1, pp. 92–104, Mar. 1999.
[80] V. J. Mooney III and G. De Micheli, "Hardware/software co-design of run-time schedulers for real-time systems," Des. Autom. Embed. Syst., vol. 6, no. 1, pp. 89–144, Sep. 2000.
[81] K. Popovici, X. Guerin, F. Rousseau, P. S. Paolucci, and A. Jerraya, "Efficient software development platforms for multimedia applications at different abstraction levels," in Proc. IEEE Int. Workshop Rapid Syst. Prototyping, 2007, pp. 113–122.

Wayne Wolf (F'98) received the B.S., M.S., and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, in 1980, 1981, and 1984, respectively.
He was with AT&T Bell Laboratories from 1984 to 1989 and with the faculty of Princeton University, Princeton, NJ, from 1989 to 2007. He is Farmer Distinguished Chair and a Research Alliance Eminent Scholar with the Georgia Institute of Technology, Atlanta. His research interests include embedded computing, embedded video and computer vision, and VLSI systems. He is a Fellow of the Association for Computing Machinery.
Dr. Wolf is the recipient of the American Society for Engineering Education Terman Award and the IEEE Circuits and Systems Society Education Award.

WOLF et al.: MULTIPROCESSOR SYSTEM-ON-CHIP (MPSoC) TECHNOLOGY 1713

Ahmed Amine Jerraya received the Engineer degree from the University of Tunis, Tunis, Tunisia, in 1980, and the “Docteur Ingénieur” and “Docteur d’Etat” degrees from the University of Grenoble, Grenoble, France, in 1983 and 1989, respectively, all in computer sciences. In 1986, he held a full research position with the Centre National de la Recherche Scientifique (CNRS), France. From April 1990 to March 1991, he was a Member of the Scientific Staff at Nortel, Canada, where he worked on linking system design tools and hardware design environments. He got the grade of Research Director within CNRS and managed, at the TIMA Laboratory, the research dealing with multiprocessor systems-on-chips. Since February 2007, he has been with CEA-LETI, where he is the Head of Design Programs. He served as the General Chair for the Conference Design, Automation and Test in Europe in 2001, coauthored eight books, and published more than 200 papers in international conferences and journals. Dr. Jerraya is the recipient of the Best Paper Award at the 1994 European Design and Test Conference for his work on hardware/software cosimulation.

Grant Martin (M’95–SM’04) received the B.S. and M.S. degrees in combinatorics and optimisation from the University of Waterloo, Waterloo, ON, Canada, in 1977 and 1978, respectively. He is a Chief Scientist with Tensilica, Inc., Santa Clara, CA. Before that, he was with Burroughs, Scotland, for six years; with Nortel/BNR, Canada, for ten years; and with Cadence Design Systems for nine years, eventually becoming a Cadence Fellow in their laboratories. He is the coauthor or Co-editor of nine books dealing with system-on-chip (SoC) design, SystemC, UML, modeling, electronic design automation for integrated circuits, and system-level design, including the first book on SoC design published in Russian. His most recent book, “ESL Design and Verification: A Prescription for Electronic System Level Methodology,” written with Brian Bailey and Andrew Piziali, was published by Elsevier Morgan Kaufmann in February 2007. He was the Cochair of the Design Automation Conference Technical Program Committee for Methods for 2005 and 2006. His particular areas of interest include system-level design, IP-based design of SoC, platform-based design, and embedded software.