
**Simon Favre, Mentor Graphics | 12/19/2011 11:03 AM EST**

Whether you are fabless, fab-lite, or an IDM, the goal of reducing a design's sensitivity to manufacturing issues should ideally be handled by the design teams. The farther downstream a design goes, the less likely it is that a manufacturing problem can be addressed without costly redesign. By addressing design-for-manufacturing (DFM) problems early, while the design is still in progress, manufacturing ramp-up issues can be avoided. One aspect of DFM is determining how sensitive a physical design (layout) is to random particle defects. The probability that a random particle defect causes a failure is a function of the spacing of layout features, so tighter spacing increases sensitivity to random defects. Because memories are relatively dense structures, they are inherently more sensitive to random defects, so when they are embedded in an SOC design, they can impact the overall yield of the device.

**Critical Area Analysis for Embedded Memory**

Critical Area Analysis (CAA) is a DFM technique that measures the susceptibility of a specific layout to random defects and indicates areas of the layout where design modifications can have the greatest positive impact on overall yield. One way to improve the yield of an SOC design with embedded memory is to increase the layout spacing in some areas to achieve a better CAA score. Another way is to build redundancy into the memory design so faulty cells can be bypassed during final production test. Of course, redundancy also has a cost in terms of real estate, so deciding how much to employ DFM techniques versus adding more redundant cells is an engineering optimization problem. Understanding how to employ CAA becomes more important at each successive node: memories just keep getting bigger, and smaller dimensions introduce new defect types. The tradeoffs that have worked well on previous nodes may not give optimal results at 28 nm.
For example, although row redundancy has been avoided in the past because it was considered too costly in terms of access time, at 28 nm it may be required to achieve acceptable yields. All of these factors make careful analysis more valuable as a design optimization tool.

**Background on CAA**

Critical area is the area of a layout where a particle of a given size will cause a functional failure. Critical area depends only on the layout and the range of particle sizes being simulated. CAA calculates values for the expected average number of faults (ANF) and yield based on the dimensions and spacing of layout features and the particle size and density distribution measured by the fab (Fig. 1). In addition to classic shorts and opens calculations, current practice in CAA is to include via and contact failures in the analysis. In fact, after analysis it is often the case that via/contact failures are the dominant failure mechanism. Other failure mechanisms may also be incorporated into the analysis, depending on the defect data provided by the fab.

Fig. 1. Definition of critical area. For shorts, the critical area (red) is the area of the layout where a particle of a given size can cause a short; for opens, it is the area where a particle can cause an open.

As shown in Fig. 2, critical area increases with increasing defect (particle) size. In principle, at the limit, the entire area of the chip will be critical for a large enough defect size. In practice, however, most fabs limit the range of defect sizes that can be simulated, based on the range of defect sizes they can detect and measure with test chips or metrology equipment.

Fig. 2. Critical area CA(x) in square microns as a function of defect size in nanometers, for one defect type.

**Defect Densities**

Semiconductor fabs have various methods for collecting defect density data. In order to be used for CAA, the defect density data must be converted into a form compatible with the analysis tool. The most common format is the simple power equation shown in (1):

D(x) = k / x^q   (1)

In this equation, x is the defect size, k is a constant derived from the density data, and the exponent q is called the fall power. The fabs curve-fit the opens and shorts defect data for each layer to an equation of this form to support CAA. A defect density needs to be available for every layer and defect type to which CAA will be applied. In practice, layers that have the same process steps, layer thickness, and design rules typically use the same defect density values.
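For illustration, the power-law density model of (1) is easy to evaluate numerically. The constants below (k and the fall power q) are made-up placeholder values, not foundry data; this is a minimal sketch, not a substitute for fitted defect statistics.

```python
# Power-law defect density model from (1): D(x) = k / x^q.
# k and q below are illustrative placeholders, not real foundry data.

def defect_density(x_nm, k, q):
    """Defect density for a particle of size x_nm (arbitrary density units)."""
    return k / (x_nm ** q)

# With a fall power around 3, density drops off steeply with particle size:
d_50 = defect_density(50.0, k=1.0e3, q=3.0)
d_200 = defect_density(200.0, k=1.0e3, q=3.0)
assert d_50 > d_200  # larger particles are much rarer
```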

**Calculation of ANF**

To determine the ANF for a design, a tool that supports CAA, such as Calibre, is used to extract the critical area CA(x) for each layer over the range of defect sizes. This is done by measuring the actual layout and determining all the areas where a particle of a given size could result in a failure. The CA(x) and the defect density data D(x) are then used to calculate the expected average number of faults (ANF) according to (2):

ANF = integral from dmin to dmax of CA(x) * D(x) dx   (2)

This calculation is performed by the tool using numerical integration. The dmin and dmax limits are the minimum and maximum defect sizes according to the defect data available for that layer. A simplifying assumption is that outside the range of defect sizes the fab has data for, the defect density is 0. Defect density data may also be provided in table form, where each specific defect size listed has a density value.

Note that ANF is not strictly a probability of failure, as ANF is not constrained to be less than or equal to 1. In most cases, the individual ANF values may simply be added to arrive at a total ANF for all layers and defect types. This neglects the problem that a large enough particle will cause multiple failures, but it greatly simplifies the calculation of ANF as well as reducing the amount of data needed from the fab.

**Calculation of Yield**

Once the ANF has been calculated, it is usually desirable to apply one or more yield models to make a prediction of the defect limited yield (DLY) of a design. One of the simplest and most widely used yield models is the Poisson model (3):

Y = exp(-ANF)   (3)

Naturally, DLY cannot account for parametric yield issues, so care must be taken when attempting to correlate to actual die yields.

**ANF and Yield for Cut Layers**

Calculation of ANF and yield for cut layers (contacts and vias) is generally simpler than for other layers. Most foundries define a probabilistic failure rate for all single vias in the design, and assume that via arrays do not fail. In fact, the classic CAA technique described above need not be used at all: all that is needed is a count of all the single cuts on a given layer, and the ANF is simply calculated as the product of the count and the failure rate (4):

ANF(via) = (single via count) * (failure rate)   (4)

Once ANF(via) has been calculated, it may be added to the ANF values for all the other defect types, and used in a yield equation as described above.
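To make the flow concrete, here is a minimal Python sketch of the calculations in (2) through (4): trapezoidal integration of CA(x)*D(x) for the ANF, the Poisson yield model, and the count-times-rate calculation for cut layers. The tabulated values that a real tool would extract from the layout and foundry data are assumed inputs here.

```python
import math

def anf_for_layer(sizes, ca, density):
    """(2): ANF = integral of CA(x)*D(x) dx over [dmin, dmax].
    sizes/ca/density are parallel lists of tabulated values;
    the integral is approximated with the trapezoidal rule."""
    total = 0.0
    for i in range(len(sizes) - 1):
        f0 = ca[i] * density[i]
        f1 = ca[i + 1] * density[i + 1]
        total += 0.5 * (f0 + f1) * (sizes[i + 1] - sizes[i])
    return total

def poisson_yield(anf):
    """(3): defect-limited yield under the Poisson model, Y = exp(-ANF)."""
    return math.exp(-anf)

def via_anf(single_via_count, failure_rate):
    """(4): cut-layer ANF is simply the single-cut count times the failure rate."""
    return single_via_count * failure_rate
```

The total ANF is then the sum over layers and defect types, including the `via_anf` terms, which can be fed into `poisson_yield` to estimate DLY.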

**Memory Redundancy**

As described in the introduction, embedded memories can account for significant yield loss due to random defects. Although other types of memories can be used in SOC designs, in this article we'll assume the design uses embedded SRAM. Typically, SRAM IP providers make redundancy an option designers can choose. Obviously, having redundant structures in the design adds area, which directly increases the cost of manufacturing the design. Additional test time also increases cost, and designers may not have a good basis for calculating that cost. The goal of analyzing memory redundancy with CAA is to ensure that DLY is maximized, while minimizing the impact on die area and test time.

**Failure Modes**

Every physical structure in a memory block is potentially subject to failures caused by random defects. Failures may be classified according to the structures affected. The most common classifications are single-bit failures, row and column failures, and peripheral failures, which may be further subdivided into I/O, sense amp, address decoder, and logic failures. There are also non-repairable fatal defects, such as shorts between power and ground. A complete discussion of failure mechanisms in memory designs is beyond the scope of this article, but in terms of repair using memory redundancy, the primary interest is in single-bit, row and column (SBRC) failures occurring in the core of the memory array. We'll ignore the non-repairable defects in this article and focus on repairable SBRC defects.

In order to analyze SBRC failures with CAA, it is important to define which layers and defect types are associated with which memory failure modes. Most of the layers in the memory design are used in multiple places, so not all defects on these layers will cause failures that are associated with repair resources. By examining the layout of a typical 6-T or 8-T SRAM bit cell, some simple associations may be made by looking at the connections of the word lines and bit lines to the bit cell. For example, one could associate poly and contact-to-poly on row lines with row failures, and associate diffusion and contact-to-diffusion on column lines with column failures. The contact layer can be separated into contacts to diffusion (N+ and P+, separately or together) and contacts to poly, each with separate failure rates. Because contacts to poly and contacts to diffusion both connect to Metal1, the Metal1 layer has to be shared between row and column failures. Vias between metal layers may all use one failure rate, or separate ones based on the design rules for each via layer. Once these associations are defined, the CAA tool can perform the required measurements and calculations.

**Repair Resources**

Embedded SRAM designs typically make use of either Built-In Self Repair (BISR) or fuse structures that allow multiplexing out the failed structures and replacing them with the redundant structures. The most common form of redundancy is redundant rows and columns. Redundant columns tend to be easier to apply, as the address decoding is not affected, only the multiplexing of bit lines and IO ports. Multiplexing with fuses requires that the die be tested, typically at wafer sort, and the associated fuses blown to accomplish the repair. The fuse approach has the advantage of simplicity and reduced area impact, but requires additional test time. BISR has greater complexity, with greater impact on die area. Regardless of the method used to apply the repair, the yield analysis is the same.

**Specification of Repair Resources**

In order for a CAA tool to accurately analyze memory redundancy, it will need to know the repair resources available in each memory block, as well as a breakdown of the failure modes by layer and defect type.

These can be specified in Calibre as a series of CAA rules (i.e., the types of defects that can occur) that have redundant resources for a particular family of memory blocks. The rules depend on the type and structure of the memory block, but are independent of the number of rows and columns and the redundancy resources. In order to specifically identify the areas of the memory that can be repaired, one can either specify the bit cell name used in each memory block, or use a marker layer in the layout database to allow the tool to identify the core areas of the memory. For each memory block, the count of total and redundant rows and columns is also needed, along with any dummy rows and columns present. Dummy units do not cause functional failures and do not need to be repaired; however, in the initial analysis, dummy units do contribute to the total ANF, as the CAA tool has no knowledge of whether or not they are functional.

Below is an example of such a memory redundancy specification, called "sramConfig." The first two lines list the CAA rules: the first line contains the column rules, followed by the row rules. The last two lines describe particular SRAM block designs and specify, in order: the block name, the rule configuration name, the total and redundant column counts, the total and redundant row counts, the dummy unit counts, and lastly the name of the bit cell. In this example, both block specifications refer to the same rule configuration (sramConfig).

```
sramConfig = {
  { {DIFF.OPEN} {DIFF.SHORT} {single.ODCO} {M1 {0.4}} {single.VIA1} {M2} {single.VIA2} {M3} }
  { {PO.OPEN} {single.POCO} {M1 {0.4}} {single.VIA1} {M2} }
}
R128x32  sramConfig 34 2 128  0 0 0 ram6t
R2048x32 sramConfig 34 2 2048 0 0 0 ram6t
```

**Calculation of Yield with Redundancy**

Once the CAA tool has performed the initial analysis, providing ANF without redundancy, the yield with redundancy can be calculated. The CAA tool determines the ANF for the total core area of each memory block listed in the redundancy configuration file. In the initial analysis, Calibre calculates the unrepaired yield using the defect density data provided by the fab.

**Calculation Method**

The calculation method is based on the well-known principle of Bernoulli trials. The goal is to ensure the required number of good units out of some total number of units. Since the calculation method is the same for rows and columns, each row or column in a memory core may simply be referred to as a "unit," and the calculation method described only once. First we calculate the number of active units in the core (5), where NT is the total units, NR the redundant units, ND the dummy units, and NA the required number of active units:

NA = NT - NR - ND   (5)

In (6) we derive the number of functional non-dummy units, NF:

NF = NT - ND   (6)

Given these parameters, we can calculate the unit ANF using (7):

ANF(unit) = ANF(core) / NF   (7)
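The unit bookkeeping of (5) through (7) can be sketched directly. Note that the even split of the core ANF across the NF functional units in `unit_anf` is inferred from the surrounding text, so treat this as an illustrative sketch rather than the tool's exact method.

```python
# Sketch of the unit calculations (5)-(7). The even split of core ANF
# across the NF functional units in unit_anf is inferred from the text.

def active_units(nt, nr, nd):
    """(5): required active units, NA = NT - NR - ND."""
    return nt - nr - nd

def functional_units(nt, nd):
    """(6): functional non-dummy units, NF = NT - ND."""
    return nt - nd

def unit_anf(core_anf, nf):
    """(7): per-unit ANF, assuming the core ANF is spread evenly over NF units."""
    return core_anf / nf

# The article's example: 130 total units, 2 redundant, no dummies.
assert active_units(130, 2, 0) == 128
assert functional_units(130, 0) == 130
```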

**A Working Example**

To see how effective memory redundancy can be, let's look at a hypothetical example. The memory of interest is a 4Mbit SRAM structured as 32K x 128 bits. In our example, the values are: NT = 130, NR = 2, ND = 0, so NA = NT - NR - ND = 128 and NF = NT - ND = 130. In order to be consistent with probability theory, we convert ANF(unit) back to a yield using the Poisson equation in (8):

p = Y(unit) = exp(-ANF(unit))   (8)

This value becomes the p term in the Bernoulli equation, which denotes the probability of success. The probability of failure, q, is defined in (9):

q = 1 - p   (9)

Let's say analysis determines that the unit yield considering one defect type will be 0.999. Then the unrepaired yield of the entire core will be 0.999 raised to the 128th power = 0.8798. In this case our goal is to realize at least 128 good units (NA) out of the total (NT). To compute the repaired yield, we add together the probabilities of all cases that satisfy the requirement of getting at least NA good units out of NF available units. The result calculated in (10) is the repaired yield for a memory core for a specific rule:

Y(repaired) = sum over k = NA..NF of C(NF, k) * p^k * q^(NF-k)   (10)

The term C(NF, k) is the binomial coefficient, which is a standard mathematical function. If we add redundancy to repair any unit defects, the repaired yield for this defect type will be ~0.999. This calculation is repeated over all rules in the memory configuration specification, and over all memory blocks listed with redundancy. For any memory core or rule where no repair resources exist, the calculation in (10) may be skipped, and the result is simply the original, unrepaired yield. Calculating the effective yield for memory blocks with no redundancy is still valuable if the CAA tool has the capability of post-processing the calculations with different memory redundancy specifications. This enables the tool to present numeric and graphical output that makes it easy to determine the optimal amount of redundancy visually.

If we carry through the analysis for all defect types, the expected unrepaired yield is ~0.35. Memory designers use a metric called the repair ratio to express the efficacy of memory redundancy:

Repair ratio = (Repaired yield - Unrepaired yield) / (1 - Unrepaired yield)   (11)
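The worked example can be checked numerically. The sketch below implements the Bernoulli sum of (10) and reproduces the figures quoted in the text (unrepaired yield ~0.8798 for one defect type, repaired yield ~0.999); it assumes NF = 130, per the derivation above.

```python
import math

def repaired_yield(p, na, nf):
    """(10): probability of getting at least na good units out of nf,
    where each unit is good independently with probability p."""
    q = 1.0 - p
    return sum(math.comb(nf, k) * p**k * q**(nf - k) for k in range(na, nf + 1))

p = 0.999                  # unit yield for one defect type, from the example
unrepaired = p ** 128      # all 128 required units must be good: ~0.8798
repaired = repaired_yield(p, na=128, nf=130)  # two spare units: ~0.999
assert unrepaired < repaired < 1.0
```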

In this case the repair ratio is (0.99 - 0.35) / (1 - 0.35) = 0.985. A value in the high 90's is considered good.

Now that we have covered the methodology, let's walk through an example using Calibre to determine an optimum redundancy configuration. First we need to set up a configuration file for the tool. For our example 4Mbit SRAM we would have the following configuration entries:

```
sramConfig = {
  { {DIFF.OPEN} {DIFF.SHORT} {single.ODCO} {M1.OPEN 0.4} {M1.SHORT 0.4} {single.VIA1}
    {M2.OPEN 0.6} {M2.SHORT 0.6} {single.VIA2} {M3.OPEN} {M3.SHORT} }
  { {PO.OPEN} {PO.SHORT} {single.POCO} {M1.OPEN} {M1.SHORT} {single.VIA1} {M2.OPEN} {M2.SHORT} }
}
spram_2048x32_core sramConfig 34 2 2050 2 0 0 ram6t
spram_128x32_core  sramConfig 34 2 130  2 0 0 ram6t
```

The bit cell name (ram6t) tells the tool the name of the hierarchical layout element that describes a memory unit that can be repaired and should be considered in the analysis. This enables it to calculate the critical area of the entire memory core (all instantiations of ram6t). With this configuration information, Calibre calculates the ANF for memory with no redundancy, as well as for various redundancy configurations.

In Figure 3 we see the results shown as a table, with values of ANF for different redundancy configurations in columns 5-8: no redundancy, 0 rows and 1 column of redundancy, 0 rows and 2 columns, and so on. The table rows show the results for the entire design, for just the memory, and for specific types of defects. Notice in the highlighted row that the ANF of the 1024x32 memory core is improved substantially by adding one redundant row (the failure rate is cut in half in column 6 compared to column 5), but adding a second redundant row shows almost no further improvement (column 7).

Fig. 3. CAA using Calibre showing effects of memory redundancy on average number of faults (ANF).
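A quick numeric check of the repair ratio formula (11), using the example yields quoted in the text (0.99 repaired, 0.35 unrepaired):

```python
def repair_ratio(repaired, unrepaired):
    """(11): fraction of the defect-limited yield loss recovered by repair."""
    return (repaired - unrepaired) / (1.0 - unrepaired)

# Example from the text: repaired yield 0.99, unrepaired yield 0.35.
ratio = repair_ratio(0.99, 0.35)  # ~0.985, in the "high 90's" considered good
```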

Figure 4 shows the effects of different redundancy schemes in terms of repair ratio, again listed by design total, by memory block, and by layers or groups of layers, with a total over all analysis layers.

Fig. 4. CAA results showing memory repair ratio.

Figure 5 shows a plot produced by the CAA tool depicting the ANF for each redundancy configuration and for each type of defect. It's clear that the combination of one redundant row and one redundant column makes a big improvement in ANF, while adding additional resources has little further impact.

Fig. 5. Memory plot showing the average number of faults for different memory redundancy configurations.

The value of these results is that the expected ANF is based on the actual layout of the memory under consideration and the specific defect densities of the fab and process used for manufacturing. The designer now has a way to determine the impact of specific redundancy configurations on the expected yield of an embedded memory.

**Conclusion**

Memory redundancy is a design technique intended to reduce manufacturing cost by improving die yield. If defect rates are low, redundancy may be unnecessary; if redundancy is applied where it has no benefit, then die area and test time are wasted, which actually increases manufacturing cost. If defect rates are high, more redundancy may be needed. In between these two extremes, redundancy may or may not be applied, depending on very broad guidelines. If no redundancy is applied, then alternative methods to improve die yield may include making the design smaller or reducing defect rates. Analysis of memory redundancy using CAA and accurate foundry defect statistics is necessary to quantify the yield improvement that can be achieved and determine the optimal configuration.

If you found this article to be of interest, visit EDA Designline, where you will find the latest and greatest design, technology, product, and news articles with regard to all aspects of Electronic Design Automation (EDA). You can also obtain a highlights update delivered directly to your inbox by signing up for the EDA Designline weekly newsletter.

**Author biography**

Simon Favre is a technical marketing engineer in the Mentor Graphics Calibre division, where he supports and directs improvements to the Calibre YieldAnalyzer product. Prior to joining Mentor Graphics, Simon was with Ponte Solutions, acquired by Mentor in 2008, and has worked at other EDA companies since 1998. Prior to 1998, Simon worked at semiconductor companies doing library development, custom design, yield engineering, and process development. Simon has BS and MS degrees in EECS from U.C. Berkeley. He can be contacted at simon_favre@mentor.com.
