
Analysis of Error Recovery Schemes for Networks-on-Chips
Srinivasan Murali, Theocharis Theocharides, Luca Benini, Giovanni De Micheli, N. Vijaykrishnan, Mary Jane Irwin

Abstract—Network on Chip (NoC) interconnects based on packet-switched communication have recently been proposed for providing scalable and reliable on-chip communication. Due to shrinking transistor sizes and power-optimized voltage swings, on-chip interconnects are increasingly susceptible to various noise sources such as cross-talk, coupling noise, soft errors and process variations. Providing resilience from such transient delay and logic errors is critical for proper system operation. Reliability, performance and energy consumption are design constraints that involve several trade-offs which have to be considered together. In this paper, we investigate the power-performance efficiency of several error recovery schemes applied at the network and link levels to handle transmission errors in on-chip networks. We present architectural details of the schemes and compare them based on power consumption, error detection capability and network performance. The objective of this work is twofold: one is to identify the major power overhead issues of the various error recovery schemes, so that efficient mechanisms can be designed to address them; the other is to provide the designer with the necessary information, aiding in the choice of the appropriate error control mechanism for the targeted application.

Index Terms—Networks on Chips, Reliability, Power, Performance, Error Recovery

I. INTRODUCTION

As devices shrink towards the nanometer scale, on-chip interconnects are becoming a critical bottleneck in meeting the performance and power consumption requirements of chip designs. Industry and academia recognize the interconnect problem as one of the important design constraints, and consequently, packet-based on-chip communication networks (known as Networks on Chips (NoCs)) have been proposed to address the challenges of increasing interconnect complexity [3], [8], [10], [11], [20]. NoC designs promise to deliver fast, reliable and energy efficient communication between on-chip components.
S. Murali, G. De Micheli, Stanford University, CA-94305, USA. Contact: {smurali,nanni}@stanford.edu
T. Theocharides, N. Vijaykrishnan, M. J. Irwin, Pennsylvania State University, PA-16802, USA. Contact: {theochar,vijay,mji}@cse.psu.edu
L. Benini, University of Bologna, Bologna-40136, Italy. Contact: lbenini@deis.unibo.it

As most application traffic is bursty in nature, packet switched networks are suitable for NoC design [11].

Another effect of the shrinking feature size is that the power supply voltage and device Vt decrease and the wires become unreliable, as they are increasingly susceptible to various noise sources such as crosstalk, coupling noise, soft errors and process variations [4]. The use of aggressive voltage scaling techniques to reduce the power consumption of the system further increases the susceptibility of the system to various noise sources. Providing resilience from such transient delay and logic errors is critical for proper system operation.

In order to protect the system from transient errors that occur in the communication sub-system, we can use error detection/correction mechanisms that are used in traditional macro-networks. In a simple retransmission scheme, error detection codes (parity or Cyclic Redundancy Check (CRC) codes) can be added to the original data by the sender, and the receiver can check for the correctness of the received data. If an error is detected, it can request the sender to retransmit the data. Alternatively, error correcting codes (such as Hamming codes) can be added to the data and errors can be corrected at the receiver. Hybrid schemes with combined retransmission and error correction capabilities can also be envisioned. The error detection/correction schemes can be based on end-to-end flow control (network level) or switch-to-switch flow control (link level). As the error detection/correction capability, area-power overhead and performance of the various error detection/correction schemes differ, the choice of the error recovery scheme for an application involves multiple power-performance-reliability trade-offs that have to be explored.

In this work, we collectively relate these three major design constraints in an attempt to characterize efficient error recovery mechanisms for the NoC design environment. We explore error control mechanisms at the data link and network layers and present the architectural details of the schemes. We investigate the energy efficiency, error protection efficiency and impact on performance of the various error recovery mechanisms. The objective of this work is twofold: one is to identify the major power overhead issues of the various error recovery schemes, so that efficient mechanisms can be designed to address them. The other objective is to provide the designer with the necessary information, aiding in the choice of the appropriate error control mechanism for the targeted application. In practice, different network architectures (topologies, switch architecture, routing, flow control) exist, making generalized quantitative comparisons difficult. Nevertheless, this paper presents a general methodology and attempts to provide comparisons based on reasonable assumptions on network architecture, incorporating features that have been successful in most existing NoC design methodologies.
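To make the detection alternatives concrete, the following sketch (ours, not from the paper; the 64-bit flit width and the CRC-8 polynomial 0x07 are illustrative assumptions) shows the kind of check bits a sender NI could append to a flit. A single parity bit misses a two-bit error, while the CRC catches it and would trigger a NACK and retransmission:

```c
#include <stdint.h>
#include <stdio.h>

/* Even parity over a 64-bit flit: detects any odd number of bit errors. */
static uint8_t parity64(uint64_t flit) {
    flit ^= flit >> 32; flit ^= flit >> 16; flit ^= flit >> 8;
    flit ^= flit >> 4;  flit ^= flit >> 2;  flit ^= flit >> 1;
    return (uint8_t)(flit & 1);
}

/* Bitwise CRC-8 (polynomial x^8+x^2+x+1, i.e. 0x07) over the flit,
 * MSB first: guaranteed to catch any burst shorter than 9 bits. */
static uint8_t crc8(uint64_t flit) {
    uint8_t crc = 0;
    for (int i = 63; i >= 0; i--) {
        uint8_t in  = (uint8_t)((flit >> i) & 1);
        uint8_t msb = (uint8_t)((crc >> 7) & 1);
        crc = (uint8_t)(crc << 1);
        if (msb ^ in) crc ^= 0x07;
    }
    return crc;
}

int main(void) {
    uint64_t flit = 0xDEADBEEFCAFEF00Dull;
    uint8_t p = parity64(flit), c = crc8(flit);

    /* Two-bit error within a span of 6 bits: parity stays the same,
     * but a degree-8 CRC always detects a burst this short. */
    uint64_t received = flit ^ (1ull << 17) ^ (1ull << 22);
    printf("parity detects: %s, crc detects: %s\n",
           parity64(received) != p ? "yes" : "no",
           crc8(received)     != c ? "yes" : "no");
    return 0;   /* prints "parity detects: no, crc detects: yes" */
}
```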

II. ERROR CONTROL MECHANISMS AND ON-CHIP NETWORKS

The quest for reliable and energy efficient NoC architectures has been the focus of multiple researchers [8], [20]. Error protection can be applied at several levels within a NoC design. Many bus encoding techniques such as [17], [18], [19] have been proposed that decrease cross-talk between wires and avoid adversarial switching patterns on the data bus. A methodology for trading off power and reliability using error control codes for Systems on Chip (SoC) signaling was first presented in [4]. In [1], the energy behavior of different error detection and correction schemes for on-chip data buses is explored. In [14], the supply voltage is varied dynamically based on the error rate on the links. In [16], the data bus is monitored to detect adverse switching patterns (that increase the wire delay) and the clock frequency is changed dynamically to avoid timing errors on the bus.

In the NoC domain, fault-tolerant routing algorithms have been proposed in [5], [12]. In [13], a fault model notation is presented and the use of multiple encoding schemes for different parts of a packet is explored. In [24], the use of single error correction and parity based error detection schemes for NoCs is explored. However, trade-offs of the various error detection/correction schemes are not presented in that work. Some existing architectures (such as [7], [9]) refer to incorporating such mechanisms into the network and data link layers; however, these works do not capture the details as well as the trade-offs involved in such mechanisms.

In this work, we explore error control mechanisms at the data link and network layers and investigate the energy efficiency, error protection efficiency and impact on performance of the various schemes. Among the multitude of NoC architectures proposed in the literature, we chose one with the characteristics presented in Figure 2, which incorporates features that have been successful in many NoC designs and is representative of a reasonable design point.

Fig. 1. Architecture for End-to-End and Switch-to-Switch Retransmission: (a) End-to-End Retransmission (sender/receiver NIs with encoder/decoder, packet buffers and ACK/credit signals); (b) Switch-to-Switch Retransmission (per-switch decoders, circular queuing plus retransmission buffers, and TMR on control signals).

III. SWITCH ARCHITECTURE FOR ERROR DETECTING AND CORRECTING SCHEMES

We identify three different classes of error recovery schemes, as explained in the following subsections.

The processor/memory cores communicate with each other through the network components: switches, links and Network Interfaces (NIs). Each core has a sender and a receiver NI for sending data to and receiving data from the core. The NIs packetize data from the cores and build the routing information for data communication. The packet is segmented into multiple flits (flow control units). There are three types of flits in a packet: a header flit, followed by payload flits and a tail flit. The header flit has the source and destination addresses for the data transfer, the routing field and part of the data to be transmitted. The payload and tail flits do not carry any routing information. We assume static routing with paths set up at the sender NI and wormhole flow control for data transfer. We use an input-queued router with credit-based flow control [23]. For maximum network throughput, the number of queuing buffers needed at each input of the switch should be at least 2Nl + 1 flits, where Nl is the number of cycles needed to cross the link between adjacent switches. This is because, in credit-based flow control, it takes 1 cycle to generate a credit, Nl cycles for the credit to reach the preceding switch and Nl cycles for a flit to reach the switch from the preceding switch.

Fig. 2. Characteristics of the NoC Architecture.

A. End-to-End Error Detection

In the end-to-end error detection (ee) scheme, parity (ee-par) or CRC codes (ee-crc) are added to the packet (refer to Figure 1(a)). A CRC or parity encoder is added to the sender NI and decoders are added at the receiver NI. The sender NI has one or more packet buffers in which it stores the packets that are transmitted. The receiver NI sends a NACK or an ACK signal back to the sender, depending on whether the data had an error or not. The ACK/NACK signal can be piggy-backed on the response packet, if this is a request-response transaction (as in the Open Core Protocol [22]). To account for errors on the ACK/NACK packets, we also have a time-out mechanism for retransmission at the sender. We use sequence identifiers for packets to detect the reception of duplicate packets. As the header flit carries critical information (like routing information), it is protected with parity/CRC codes that are checked at each hop traversal. Also, the flit-type bits (that identify header, body or tail flits) are protected using redundancy. If a switch detects an error on the header flit of a packet, then it drops the packet.

B. Switch-to-Switch Error Detection

In switch-to-switch error detection schemes, the error detection hardware is added at each switch input and retransmission of data is between adjacent switches, as shown in Figure 1(b). Here we identify two different schemes: parity/CRC at the flit level and at the packet level. The switch architecture is modified to support these schemes. The additional buffers added at each input of the switch are used to store packets until an ACK/NACK comes from the next switch/NI. The number of buffers needed to support switch-to-switch retransmission depends on whether error detection is done at the packet level or the flit level.
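The ee scheme's sender-side bookkeeping can be summarized in a few lines. This is a minimal sketch under our own assumptions (Npb = 2 buffers, a 64-cycle time-out, and hypothetical resend/seq fields), not the authors' implementation:

```c
#include <stdint.h>

#define NPB            2   /* packet buffers per NI (from simulations) */
#define FLITS_PER_PKT  4
#define TIMEOUT_CYC   64   /* assumed time-out, in cycles              */

struct pkt_buf {
    uint64_t flits[FLITS_PER_PKT];
    uint8_t  seq;          /* sequence id; the receiver NI uses it to
                              drop duplicate packets                   */
    uint32_t sent_at;      /* cycle of last (re)transmission           */
    int      in_flight;    /* waiting for ACK/NACK                     */
};

static struct pkt_buf bufs[NPB];

/* Called every cycle at the sender NI: a lost or corrupted ACK/NACK
 * eventually triggers a retransmission via the time-out. */
void ee_sender_tick(uint32_t now) {
    for (int i = 0; i < NPB; i++)
        if (bufs[i].in_flight && now - bufs[i].sent_at >= TIMEOUT_CYC) {
            /* resend(bufs[i].flits, bufs[i].seq);  (network call) */
            bufs[i].sent_at = now;
        }
}

/* ACK frees the buffer for a new packet; NACK retransmits at once. */
void ee_on_response(uint8_t seq, int is_ack, uint32_t now) {
    for (int i = 0; i < NPB; i++)
        if (bufs[i].in_flight && bufs[i].seq == seq) {
            if (is_ack) bufs[i].in_flight = 0;
            else        bufs[i].sent_at = now; /* + resend(...) */
        }
}

int main(void) {
    bufs[0] = (struct pkt_buf){ .seq = 7, .sent_at = 0, .in_flight = 1 };
    ee_sender_tick(64);        /* no ACK by cycle 64: retransmits       */
    ee_on_response(7, 1, 70);  /* ACK arrives: buffer is freed          */
    return bufs[0].in_flight;  /* 0 on success                          */
}
```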

In the switch-to-switch flit-level error detection (ssf) scheme, the parity/CRC bits are added to each flit of the packet by the sender NI. At each input of the switch, there are two sets of buffers: queuing buffers that are used for the credit-based flow control as in the base switch architecture, and retransmission buffers for supporting the switch-to-switch retransmission mechanism. Similar to the case of the queuing buffers, the retransmission buffers at each switch input should have a capacity of 2Nl + 1 flits for full throughput operation. In the packet-level error detection (ssp) scheme, the parity/CRC bits are added to the tail flit of the packet. As the error checking is done only when the tail flit reaches the next switch, the number of retransmission buffers needed at each switch input is 2Nl + f, where f is the number of flits in the packet. We also need header flit protection, as in the ee scheme.

C. Hybrid Single Error Correcting, Multiple Error Detecting Scheme

In this scheme (ec+ed), the receiver corrects any single bit error on a flit, but for multiple bit errors, it requests end-to-end retransmission of the data from the sender NI. We do not consider pure error correcting schemes in this work, as in such a scheme, when a packet is dropped by a switch (due to errors in the header flit), it is difficult to recover from the situation because there is no mechanism for the sender to retransmit the packet.

For correct functionality of the system, the error detection/correction circuitry and the retransmission buffers need to be error-free. We use relaxed scaling rules and soft-error tolerant design methodologies for designing these components [2]. In our power estimations, we take into account the additional overhead incurred in making these components error-free (which increases the power consumption of these components by around 8%-10%). We use redundancy (such as Triple Modular Redundancy (TMR)) to handle errors on the control lines (such as the ACK line).

IV. ENERGY ESTIMATION AND ERROR MODELS

A. Energy Estimation

A generic energy estimation model in [21] relates the energy consumption of each packet to the number of hop traversals and the energy consumed by the packet at each hop. We expanded this estimation a step further by designing and characterizing the circuit schematics of the individual components of the switch in 70nm technology using the Berkeley Predictive Technology Model [6]. From this, we estimated the average dynamic power as well as the leakage power per flit per component. We imported these values into our architectural-level cycle-accurate NoC simulator and simulated all individual components in unison to estimate both the dynamic and the leakage power in routing a flit.
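The correct-or-retransmit decision in the ec+ed scheme is essentially a SEC-DED decode. The sketch below is our illustration on an 8-bit payload (the paper's scheme protects whole flits): a single-bit error is corrected at the receiver, while a double-bit error is flagged so the receiver NI can request end-to-end retransmission.

```c
#include <stdint.h>
#include <stdio.h>

/* Encode 8 data bits into a 13-bit SEC-DED codeword: bits 1..12 form a
 * Hamming(12,8) code (parity bits at positions 1, 2, 4, 8) and bit 0
 * holds an overall parity bit that exposes double-bit errors. */
static uint16_t secded_encode(uint8_t d) {
    static const int dpos[8] = {3, 5, 6, 7, 9, 10, 11, 12};
    uint16_t cw = 0;
    for (int i = 0; i < 8; i++)
        if (d >> i & 1) cw |= (uint16_t)(1u << dpos[i]);
    for (int p = 1; p <= 8; p <<= 1) {          /* Hamming parity bits */
        int par = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((pos & p) && (cw >> pos & 1)) par ^= 1;
        cw |= (uint16_t)(par << p);
    }
    int all = 0;                                /* overall parity, bit 0 */
    for (int pos = 1; pos <= 12; pos++) all ^= cw >> pos & 1;
    return cw | (uint16_t)all;
}

/* Returns 0 = clean, 1 = single error corrected, 2 = request retransmit. */
static int secded_decode(uint16_t *cw) {
    int syn = 0, ones = 0;
    for (int pos = 1; pos <= 12; pos++)
        if (*cw >> pos & 1) { syn ^= pos; ones ^= 1; }
    int overall = ones ^ (int)(*cw & 1);        /* parity of all 13 bits */
    if (syn == 0 && overall == 0) return 0;
    if (overall) {                              /* odd flips: assume one */
        *cw ^= (uint16_t)(1u << syn);           /* syn==0: bit 0 itself  */
        return 1;
    }
    return 2;                  /* even flips, nonzero syndrome: detect   */
}

int main(void) {
    uint16_t cw  = secded_encode(0xA5);
    uint16_t one = cw ^ (uint16_t)(1u << 6);               /* 1-bit error */
    uint16_t two = cw ^ (uint16_t)(1u << 6) ^ (uint16_t)(1u << 9);
    printf("single: %d, double: %d\n", secded_decode(&one),
           secded_decode(&two));                /* prints "single: 1, double: 2" */
    return 0;
}
```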

B. Error Models

In order to analyze the error recovery schemes, we make use of the error models from [1]. As our objective is to compare the error protection efficiency of the various coding schemes, we fix a constraint on the residual flit-error probability; that is, we impose each scheme to have the same probability of an undetected error (per flit) at the decoder side. We assume that an undetected error in the system causes the system to crash. We consider two sets of experiments: in one set of experiments we assume that the operating voltage of the system (with the different error recovery schemes) is varied to match a certain residual flit-error rate requirement; in another set of experiments, we assume the voltage for the various schemes to be the same, but investigate the effect of different error rates on the schemes.

V. EXPERIMENTS AND SIMULATION RESULTS

A. Power Consumption of Schemes for Fixed Residual Error Rates

In this sub-section, we assume that the power supply voltage is chosen for each of the error detection/correction schemes based on the residual flit-error rate that the system needs to support. We consider only end-to-end schemes in this sub-section. We consider a 4×4 mesh network with 16 cores and 16 switches. We assume the number of flits in a packet to be 4 and the flit size to be 64 bits. We compare the power consumption of systems with parity based encoding, CRC based encoding and hybrid single error correcting, multiple error detecting encoding with that of the original system (without error protection codes).

The network power consumption for the various schemes is presented in Figure 3. The residual flit-error rates on the x-axis represent the Mean Time To Failure (MTTF) for the systems. As an example, for an injection rate of 0.2 flits/cycle from each core and for a uniform traffic pattern, a residual flit-error rate of 10^-12 signifies that on average the system operates for 3.125 × 10^11 cycles before an undetected error causes the system to crash (assuming 16 cores, with each core injecting 0.2 flits/cycle, so that 10^12 flits are generated in 3.125 × 10^11 cycles). For a 200 MHz system, this represents an MTTF of 26 minutes.

Fig. 3. Power consumption of schemes (orig, ee-par, ee-crc, ec+ed) versus residual flit error rate (1e-7 to 1e-17).

Fig. 4. Latency of error detection and correction schemes.
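The MTTF arithmetic above can be checked directly; this small calculation, using the paper's own numbers, reproduces the 26-minute figure:

```c
#include <stdio.h>

int main(void) {
    double cores = 16, inj_rate = 0.2;     /* flits/cycle per core    */
    double residual_rate = 1e-12;          /* undetected errors/flit  */
    double f_clk = 200e6;                  /* 200 MHz system clock    */

    double flits_per_cycle  = cores * inj_rate;          /* 3.2      */
    double flits_to_failure = 1.0 / residual_rate;       /* 1e12     */
    double cycles = flits_to_failure / flits_per_cycle;  /* 3.125e11 */
    double mttf_min = cycles / f_clk / 60.0;

    printf("cycles to failure: %.4g, MTTF: %.1f minutes\n",
           cycles, mttf_min);               /* 3.125e+11, 26.0 minutes */
    return 0;
}
```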

Note that for most applications, reasonable MTTF values would be of the order of months or years.

The power numbers are plotted for the original system (orig) that has no error control circuitry, the parity-based end-to-end error detection scheme (ee-par), the CRC based error detection scheme (ee-crc) and the hybrid single error correcting, multiple error detecting scheme (ec+ed). The orig and ee-par schemes have higher power consumption than the ee-crc and ec+ed schemes, as the error detection capability of these schemes is lower and hence they require a higher operating voltage to achieve the same residual flit-error rate. The hybrid ec+ed scheme has lower power consumption at high residual flit-error rates, and the ee-crc scheme has lower power consumption at lower residual error rates. This is because, in the ec+ed scheme, the number of bits needed for the error correction and detection codes is more than in the pure detection scheme. At high error rates, in the ee-crc scheme there is more traffic injected into the network, thereby causing more power consumption than the ec+ed scheme. At lower error rates, the power overhead due to error correction in the ec+ed scheme is more than the power consumed in retransmission in the ee-crc scheme.

TABLE I. COMPONENT-WISE POWER CONSUMPTION: dynamic and static power (mW) of the 5x5 switch (buffers, crossbar, control, total Psw), the CRC encoder (Pcrce) and decoder (Pcrcd), the SEC encoder (Psece) and decoder (Psecd), the switch retransmission flit buffer (1 flit, Psrfb) and the packet buffer (1 packet, Ppb).

Fig. 5. Power consumption of error recovery schemes (ee, ssp, ssf, ec+ed) at 0.1 flits/cycle, for flit error rates from 0.001% to 1%.

B. Performance Comparison of Reliability Schemes

In this sub-section we investigate the performance of the pure end-to-end and switch-to-switch error detection schemes (ee, ssf, ssp) and the hybrid error detection/correction scheme (ec+ed). In this and the following experiments, we assume that the operating voltage for the system is fixed at design time (to be equal to 0.85 V) and investigate the effect of varying error rates in the system. We use the flit-error rate (defined as the probability of one or more errors occurring in a flit) as the metric for the error rate of the system. We perform experiments on the 16-core mesh with varying injection rates for a uniform traffic pattern. For a low flit-error rate and a low injection rate, the average packet latencies of the various schemes (Figure 4) are almost the same. However, as the error rate and/or the flit injection rate increases, the end-to-end retransmission scheme (ee) incurs a large latency penalty compared to the other schemes. The packet-based switch-to-switch retransmission scheme (ssp) has slightly higher packet latency than the flit-based switch-to-switch retransmission scheme (ssf), as in the flit-based scheme errors on packets are detected earlier. As expected, the hybrid single error correcting, multiple error detecting scheme (ec+ed) has the least average packet latency of the schemes.
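Under a simple independent-bit-error assumption (our simplification; the paper's error models come from [1]), the flit-error rate follows from a per-bit error probability p_bit as P_flit = 1 - (1 - p_bit)^w for a w-bit flit:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    int w = 64;                         /* flit width in bits */
    double pbits[] = {1e-5, 1e-4, 1e-3};
    for (int i = 0; i < 3; i++)         /* P(>=1 bit error in a flit) */
        printf("p_bit=%.0e -> flit error rate=%.3g\n",
               pbits[i], 1.0 - pow(1.0 - pbits[i], w));
    return 0;   /* build with -lm */
}
```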

C. Power Consumption Overhead of Reliability Schemes

The power consumption of a switch (with 5 inputs, 5 outputs, Nl = 2), the error detection/correction coders, and the retransmission and packet buffers (for 50% switching activity at each component, each cycle) are presented in Table I. We assume an operating frequency of 200 MHz, a flit size of 64 bits and a packet size of 4 flits. The link lengths are decided by the physical implementation of the topology. In this paper, we analyze the power overhead associated with the schemes for error detection and recovery. To facilitate the comparison of the various error recovery schemes, we assume that the base NI power consumption (when there are no packet buffers for retransmission) is taken to be part of the processor/memory core power consumption, as it is invariant for all the schemes. In the formulation of the power overhead, for simplicity of notation, we take the parameters (such as traffic rate, link length, buffering) to be the same for all the NIs and all the switches.

We need the following definitions to formulate analytical expressions for the power overhead of the schemes. Let inj_rate be the traffic injected by each of the NIs, and let sw_traf be the rate of traffic injected at each switch. Also, let the increase in traffic at each switch due to retransmission be represented by sw_incrtraf. For the ee scheme, let the number of packet buffers required at each NI for retransmission be Npb. Let Ppacketsizeinc be the total power overhead due to the increase in packet size caused by the addition of code words and other control bits. In the above set of parameters, the traffic rates from/to the NIs and switches and the traffic overhead for retransmission (in the ee and ec+ed schemes) are obtained from simulations. The number of packet buffers required in the ee scheme to support an application performance level can be obtained from (possibly multiple sets of) simulations. It is assumed that when the power numbers are scaled based on the traffic through the components, only the dynamic power consumption component is scaled. For simplicity of notation, we represent both dynamic and static power consumption by a single set of variables (refer to Table I for the notations).

The power overhead associated with the ee scheme is given by:

Poverhead_ee = Σ_{∀ NIs} [inj_rate × (Pcrce + Pcrcd + Npb × Ppb)] + Σ_{∀ Switches} [sw_incrtraf × Psw] + Ppacketsizeinc    (1)

In this equation, there are two major components of power overhead: one is the power overhead associated with the packet buffers at the NIs for retransmission, and the other is the increase in power consumption due to increased network traffic.
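Equation (1) is straightforward to evaluate. In the sketch below, every power number is a placeholder standing in for the corresponding Table I entry, and sw_incrtraf and Ppacketsizeinc would come from simulation, as the text notes:

```c
#include <stdio.h>

int main(void) {
    int    n_nis = 16, n_switches = 16;      /* 4x4 mesh              */
    double inj_rate = 0.1;                   /* flits/cycle per NI    */
    int    Npb = 2;                          /* packet buffers per NI */
    /* Placeholder per-component powers (mW), standing in for Table I */
    double P_crce = 0.2, P_crcd = 0.3, P_pb = 4.0, P_sw = 15.0;
    double sw_incrtraf = 0.02;               /* from simulation       */
    double P_packetsizeinc = 3.0;            /* assumed, mW           */

    double P_ee = n_nis * inj_rate * (P_crce + P_crcd + Npb * P_pb)
                + n_switches * sw_incrtraf * P_sw
                + P_packetsizeinc;           /* Eq. (1)               */
    printf("Poverhead_ee = %.2f mW\n", P_ee);
    return 0;
}
```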

The increase in traffic in the ee scheme is due to two reasons: (a) when ACK/NACK packets cannot be piggy-backed to the source (as an example, "writes" to memory locations normally do not require a response back to the source), they need to be sent as separate packets; and (b) at higher error rates, the network traffic increases due to the retransmission of packets. An optimization can be performed in the first case, as the ACK/NACK packet needs to be only one flit long. Even with this optimization, we found that the total power consumption increases by 10%-15% due to this overhead. For the second case, we found that even at flit error rates of 1%, this increase has a much lower impact than the above case. For the ee scheme to work, we need to have sequence identifiers for packets and mechanisms to detect the reception of duplicate packets. We consider the power consumption due to the look-up tables and control circuitry associated with these mechanisms to be part of the packet buffer power consumption (these typically increase the packet buffer power overhead by 10%).

The power overhead of the ssf scheme is represented by:

Poverhead_ssf = Σ_{∀ NIs} [inj_rate × Pcrce] + Σ_{∀ Switches} [sw_traf × ((2Nl + 1) × Psrfb + Pcrcd)] + Ppacketsizeinc    (2)

The power consumption of the switch retransmission buffers is the major component of the overhead; it depends linearly on the link lengths. As Ppacketsizeinc affects the schemes in almost a similar manner (the ssf scheme needs code bits on each flit, while the ee scheme needs additional information for packet identification, header flit protection and packet code words), it has a lesser effect on deciding the choice of scheme. The power overheads of the ssp and ec+ed schemes can be easily derived from the overhead equations for the ssf and ee schemes.

The network power consumption for the various error recovery schemes for the 16-core mesh network is presented in Figure 5. We performed simulations with a uniform traffic pattern, with each core injecting 0.1 flits/cycle. We assumed the link lengths to be 2 cycles long (which, as explained in Section III-B, determines the retransmission buffering at each switch input), and the number of packet retransmission buffers needed to support the application performance level was obtained from simulations (which turned out to be 2 packet buffers/NI).

TABLE II. PACKET BUFFERS, WITH Nl = 2: power consumption (mW) of the ee scheme for Npb = 1 to 6 packet buffers per NI.

TABLE III. LINK LENGTH: power consumption (mW) of the ee (Npb = 2) and ssf schemes for link lengths Nl = 1 to 5 cycles.

Fig. 6. Effect of Hop Count: power consumption (mW) of the ee and ssf schemes versus the average hop count (2 to 5).
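The two overhead expressions make the link-length trade-off behind Table III easy to probe. The values below are placeholders (the traffic rates would come from simulation), so only the trend is meaningful: the ssf term (2Nl + 1) × Psrfb grows linearly in Nl, while Eq. (1) for ee does not depend on Nl at all.

```c
#include <stdio.h>

int main(void) {
    int n_nis = 16, n_sw = 16, Npb = 2;
    double inj = 0.1, sw_traf = 0.25;        /* flits/cycle, assumed  */
    double Pcrce = 0.2, Pcrcd = 0.3, Ppb = 4.0, Psrfb = 1.2, Psw = 15.0;
    double sw_incr = 0.02, Ppktinc = 3.0;    /* placeholders          */

    for (int Nl = 1; Nl <= 5; Nl++) {
        double ee  = n_nis * inj * (Pcrce + Pcrcd + Npb * Ppb)
                   + n_sw * sw_incr * Psw + Ppktinc;          /* Eq. (1) */
        double ssf = n_nis * inj * Pcrce
                   + n_sw * sw_traf * ((2 * Nl + 1) * Psrfb + Pcrcd)
                   + Ppktinc;                                 /* Eq. (2) */
        printf("Nl=%d: ee=%.1f mW, ssf=%.1f mW\n", Nl, ee, ssf);
    }
    return 0;
}
```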

For this experiment we observe that the power consumption of the switch-based error detection schemes (ssf, ssp) is higher than that of the end-to-end retransmission schemes (ee, ec+ed). This is attributed to two factors: (a) the switch buffering needed for retransmission in the ssf and ssp schemes for this set-up is large compared to the packet buffering needs of the ee and ec+ed schemes; and (b) due to the uniform traffic pattern, the traffic through each switch is high, thus increasing the ssf and ssp retransmission overhead. We examine these two points in detail in the following sub-section.

D. Effect of Buffering Requirements, Traffic Patterns and Packet Size

One of the major power overheads for the schemes is the amount of packet and switch buffering needed for retransmission. To see the impact of the buffering requirements, we performed experiments on the mesh network, varying the number of packet buffers and the link lengths (and hence the number of retransmission buffers for the ssf scheme). The results are presented in Tables II and III. For small link lengths, and when the packet buffering requirement of the ee scheme is large, the ssf scheme is more power efficient than the ee scheme. On the other hand, when the link lengths are large, the power overhead of the ssf scheme increases rapidly, as more power is consumed in the switch retransmission buffers.

Another important parameter that affects the choice of the schemes is the application traffic characteristics. To see the impact of various traffic scenarios, we performed experiments varying the average hop count for data transfer. The power overhead of the ee and ssf schemes (assuming Npb = 2, Nl = 2) for the different scenarios is shown in Figure 6. In the figure, an average hop count of 2 corresponds to a neighbor traffic pattern, and the other hop count values can be interpreted as representing other traffic patterns. As the average hop count for data transfer increases, the traffic through each switch increases, thereby consuming more power in the switch retransmission buffers. Thus, for traffic flows that traverse a larger number of hops, or when the network size is large, the ee scheme is more power efficient.

The packet size also affects the choice of the schemes. The power consumption of the flit-based (ssf) and packet-based (ssp) schemes for a varying number of flits/packet is presented in Figure 7. In this experiment, we assume that the packet size is kept constant (256 bits) and we vary the number of flits/packet. As the number of flits/packet increases, the buffering needs of the packet-based scheme increase, and hence the power consumption of the packet-based scheme increases rapidly. The flit-based scheme also incurs more power consumption with an increasing number of flits/packet, as the ratio of useful bits to overhead bits (i.e., the CRC code bits) decreases as the number of flits/packet increases.

In general, it is difficult to make generalizations about the efficiency of the schemes. However, if the parameters (such as link length, packet buffering needs, etc.) are obtained from user input and simulations, they can be fed into the above methodology to compare the error recovery schemes.
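The buffer-sizing rules from Section III make the Figure 7 trend concrete: a few lines suffice to tabulate the per-input retransmission buffering of ssf (2Nl + 1 flits) versus ssp (2Nl + f flits) as the 256-bit packet is cut into more flits.

```c
#include <stdio.h>

int main(void) {
    int Nl = 2, packet_bits = 256;           /* as in the experiment  */
    for (int f = 4; f <= 32; f *= 2) {       /* flits per packet      */
        int flit_bits = packet_bits / f;
        printf("f=%2d (flit=%2d bits): ssf=%d flits, ssp=%d flits\n",
               f, flit_bits, 2 * Nl + 1, 2 * Nl + f);
    }
    return 0;   /* ssf stays at 5 flits; ssp grows from 8 to 36 */
}
```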

TABLE IV
NOC AREA
Scheme   Area (mm2)
orig     3.76
ee       5.4
ssf      5.36
ec+ed    4.3

Fig. 7. Flit vs. packet schemes: power consumption (mW) of the packet-based (PB) and flit-based (FB) schemes for 4, 8, 16 and 32 flits per packet.

VI. DISCUSSIONS & CONCLUSIONS

For the ee and ec+ed schemes, the major components of the power overhead are the packet buffering needs at the NIs and the increase in network traffic due to ACK/NACK packets. Design methodologies that trade off application performance against the buffering needs would result in a smaller power overhead; methods from queuing theory can be explored to design these buffers. Methods that reduce the ACK/NACK traffic (such as multiple packets sharing a single ACK/NACK signal) would also be interesting to explore. Another avenue is to explore mechanisms that reduce the control overhead associated with duplicate packet reception in the ee scheme. For the ssf and ssp schemes, the major power overhead is due to the retransmission buffers needed at the switches; for reasonable flit sizes, we found that the flit-based scheme is more power efficient than the packet-based scheme.

From the experiments we observe that for networks with long link lengths or large hop counts, end-to-end detection schemes are power efficient. Switch-level detection mechanisms are power efficient when the link lengths are small and when the end-to-end scheme needs large packet buffering at the NIs. For hierarchical networks, switch-based error control can be implemented for local communication and end-to-end error control can be implemented for global communication (which traverses longer links and hop counts). At low error rates, the average latencies incurred by all the schemes are similar, while at higher error rates, a hybrid error detection and correction mechanism has higher performance than the other schemes. As the ee scheme uses a subset of the hardware resources used for the ec+ed scheme, the error correction circuitry can be selectively switched on/off, depending on the error rates prevailing in the system.

The area of the network components (the switches and the additional hardware for error recovery) for the various schemes for the 16-node mesh network (with Npb = 2 and Nl = 2) is presented in Table IV. The area overheads of the schemes are comparable. Further work in this area can be done to investigate the effects of application and software level reliability schemes and to provide online adaptation capabilities, such as reconfigurable designs for error resilience.

VII. ACKNOWLEDGMENT

This research is supported by the MARCO Gigascale Systems Research Center (GSRC) and NSF (under contract CCR-0305718).

REFERENCES

[1] D. Bertozzi, L. Benini, G. De Micheli, "Low power error resilient encoding for on-chip data buses", Proc. DATE, 2002.
[2] V. Narayanan, Y. Xie, "Computing in the presence of soft errors", Tutorial, ASPLOS XI, Oct. 2004.
[3] N. R. Shanbhag, "A mathematical basis for power-reduction in digital VLSI systems", IEEE Trans. on Circuits and Systems, Part II, vol. 44, no. 11, pp. 935-951, Nov. 1997.
[4] R. Hegde, N. R. Shanbhag, "Towards Achieving Energy Efficiency in Presence of Deep Submicron Noise", IEEE Trans. on VLSI Systems, 8(4):379-391, Aug. 2000.
[5] M. Pirretti et al., "Fault Tolerant Algorithms for Network-On-Chip Interconnect", Proc. ISVLSI, Feb. 2004.
[6] "Berkeley Predictive Technology Model", available at http://www-device.eecs.berkeley.edu/~ptm/
[7] M. Dall'Osso et al., "xpipes: a Latency Insensitive Parameterized Network-on-chip Architecture For Multi-Processor SoCs", Proc. ICCD, 2003, pp. 536-539.
[8] "The Nostrum Backbone", http://www.imit.kth.se/info/FOFU/Nostrum/
[9] C. Zeferino, A. Susin, "SoCIN: A Parametric and Scalable Network-on-Chip", Proc. 16th Symposium on Integrated Circuits and Systems Design, 2003.
[10] E. Rijpkema et al., "Trade offs in the design of a router with both guaranteed and best-effort services for networks on chip", Proc. DATE, 2003.
[11] P. Guerrier, A. Greiner, "A generic architecture for on-chip packet-switched interconnections", Proc. DATE, 2000, pp. 250-256.
[12] T. Dumitras, S. Kerner, R. Marculescu, "Networks-On-Chip: The Quest for On-Chip Fault-Tolerant Communication", Proc. ISVLSI, Feb. 2003.
[13] H. Zimmer et al., "A Fault Model Notation and Error-Control Scheme for Switch-to-Switch Buses in a Network-on-Chip", Proc. ISSS/CODES, 2003.
[14] F. Worm et al., "An Adaptive Low-power Transmission Scheme for On-chip Networks", Proc. ISSS, 2002, pp. 92-100.
[15] M. Pirretti et al., "Fault Tolerant Algorithms for Network-On-Chip Interconnect", Proc. GLSVLSI, 2004.
[16] L. Li et al., "A Crosstalk Aware Interconnect with Variable Cycle Transmission", Proc. DATE, Feb. 2004.
[17] K. Hirose et al., "A Bus Delay Reduction Technique Considering Crosstalk", Proc. DATE, 2000.
[18] S. R. Sridhara, N. R. Shanbhag, "Coding for system-on-chip networks: a unified framework", Proc. DAC, June 2004, pp. 103-106.
[19] K. Patel et al., "Error-Correction and Crosstalk Avoidance in DSM Busses", IEEE Transactions on VLSI Systems, pp. 1076-1080, Oct. 2004.
[20] L. Benini, G. De Micheli, "Networks on Chips: A New SoC Paradigm", IEEE Computer, pp. 70-78, Jan. 2002.
[21] H. Wang et al., "Power-Driven Design of Router Micro-architectures in On-Chip Networks", Proc. of the 36th MICRO, Dec. 2003.
[22] Open Core Protocol, "http://www.ocpip.org/"
[23] W. Dally, B. Towles, "Principles and Practices of Interconnection Networks", Morgan Kaufmann, 2004.
[24] P. Vellanki et al., "Quality-of-Service and Error Control Techniques for Network-on-Chip Architectures", Proc. GLSVLSI, 2004, pp. 45-50.