
Critical Area Analysis and Memory Redundancy

Simon Favre, Mentor Graphics 12/19/2011 11:03 AM EST


Whether you are fabless, fab-lite, or IDM, the goal of reducing a design's sensitivity to manufacturing issues should ideally be handled by the design teams. The farther downstream a design goes, the less likely it is that a manufacturing problem can be addressed without costly redesign. By addressing design-for-manufacturing (DFM) problems early, when the design is still in progress, manufacturing ramp-up issues can be avoided. One aspect of DFM is determining how sensitive a physical design (layout) is to random particle defects. The probability that a random particle causes a defect is a function of the spacing of layout features, so tighter spacing increases sensitivity to random defects. Because memories are relatively dense structures, they are inherently more sensitive to random defects, so when they are embedded in an SOC design, they can impact the overall yield of the device.

Critical Area Analysis for Embedded Memory

Critical Area Analysis (CAA) is a DFM technique that measures the susceptibility of a specific layout to random defects and indicates areas of the layout where design modifications can have the greatest positive impact on overall yield. One way to improve yield of an SOC design with embedded memory is to increase the layout spacing in some areas to achieve a better CAA score. Another way is to build redundancy into the memory design so faulty cells can be bypassed during final production test. Of course, redundancy also has a cost in terms of real estate. So deciding how much to employ DFM techniques versus adding more redundant cells is an engineering optimization problem. Understanding how to employ CAA becomes more important at each successive node. Memories keep getting bigger, and smaller dimensions introduce new defect types. The tradeoffs that have worked well on previous nodes may not give optimal results at 28 nm. For example, although row redundancy has been avoided in the past because it has been considered too costly in terms of access time, at 28 nm it may be required to achieve acceptable yields. All of these factors make careful analysis more valuable as a design optimization tool.

Background on CAA

Critical area is the area of a layout where a particle of a given size will cause a functional failure. Critical area depends only on the layout and the range of particle sizes being simulated. CAA calculates values for the expected Average Number of Faults (ANF) and yield based on the dimensions and spacing of layout features and the particle size and density distribution measured by the fab (Fig. 1). In addition to classic shorts and opens calculations, current practice in CAA is to include via and contact failures in the analysis. In fact, after analysis, it is often the case that via/contact failures are the dominant failure mechanism. Other failure mechanisms may also be incorporated into the analysis, depending on the defect data provided by the fab.

Fig. 1. Definition of critical area. For shorts, the critical area (red) is the area of the layout where a particle of a given size can cause a short. For opens, it is the area where a particle can cause an open.

As shown in Fig. 2, critical area increases with increasing defect (particle) size. At the limit, the entire area of the chip is critical for a large enough defect size. In practice, most fabs limit the range of defect sizes that can be simulated, based on the range of defect sizes they can detect and measure with test chips or metrology equipment.

Fig. 2. Critical area CA(x) in square microns as a function of defect size in nanometers for one defect type.

Defect Densities

Semiconductor fabs have various methods for collecting defect density data. To be used for CAA, the defect density data must be converted into a form compatible with the analysis tool. The most common format is the simple power equation shown in (1). In this equation, k is a constant derived from the density data, x is the defect size, and the exponent q is called the fall power. The fabs curve-fit the opens and shorts defect data for each layer to an equation of this form to support CAA. In principle, a defect density needs to be available for every layer and defect type to which CAA will be applied. In practice, however, layers that have the same process steps, layer thickness, and design rules typically use the same defect density values.
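Equation (1) itself is not reproduced in this text. Based on the description above (k a constant fitted to the fab's density data, x the defect size, and q the fall power), the usual power-law form it refers to is

D(x) = k / x^q    (1)

where D(x) is the defect density at defect size x.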

Defect density data may also be provided in table form, where each listed defect size has a density value. A simplifying assumption is that outside the range of defect sizes for which the fab has data, the defect density is 0.

Calculation of ANF

To determine ANF for a design, a tool that supports CAA, such as Calibre, is used to extract the critical area CA(x) for each layer over the range of defect sizes. This is done by measuring the actual layout and determining all the areas where a particle of a given size could result in a failure. The CA(x) and the defect density data D(x) are then used to calculate the expected average number of faults (ANF) according to (2). This calculation is performed by the tool using numerical integration. The dmin and dmax limits are the minimum and maximum defect sizes according to the defect data available for that layer.
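The integral referred to as (2) is also not reproduced; given the quantities defined above (CA(x), D(x), dmin, dmax), the standard critical-area form is

ANF = \int_{dmin}^{dmax} CA(x) D(x) dx    (2)

which the tool evaluates numerically over the available defect-size range.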

In most cases, the individual ANF values may simply be added to arrive at a total ANF for all layers and defect types. Note that ANF is not strictly a probability of failure, as ANF is not constrained to be less than or equal to 1.

Calculation of Yield

Once the ANF has been calculated, it is usually desirable to apply one or more yield models to predict the Defect Limited Yield (DLY) of a design. Naturally, DLY cannot account for parametric yield issues, so care must be taken when attempting to correlate it to actual die yields. One of the simplest and most widely used yield models is the Poisson model (3).
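Equation (3) is not shown; the Poisson yield model it names has the standard form

Y = e^{-ANF}    (3)

so a total ANF of 0.1, for example, corresponds to a defect limited yield of about 0.905.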

ANF and Yield for Cut Layers

Calculation of ANF and yield for cut layers (contacts and vias) is generally simpler than for other layers. In fact, the classic CAA technique described above need not be used at all. Most foundries define a probabilistic failure rate for all single vias in the design and assume that via arrays do not fail. This simplifying assumption neglects the problem that a large enough particle will cause multiple failures, but it greatly simplifies the calculation of ANF and reduces the amount of data needed from the fab. All that is needed is a count of all the single cuts on a given layer; the ANF is then simply the product of the count and the failure rate (4).
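Equation (4) is not reproduced. Consistent with the description, and writing N_single for the count of single cuts on a layer and FR for the foundry-supplied single-via failure rate (these labels are ours, not Calibre terms), it reduces to

ANF(via) = N_single × FR    (4)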

Once ANF(via) has been calculated, it may be added to the ANF values for all the other defect types and used in a yield equation as described above. Vias between metal layers may all use one failure rate, or separate rates based on the design rules for each via layer. The contact layer can be separated into contacts to diffusion (N+ and P+ separately, or together) and contacts to poly, each with separate failure rates. Again, the CAA tool can perform the required measurements and calculations.

Memory Redundancy

As described in the introduction, embedded memories can account for significant yield loss due to random defects. Although other types of memories can be used in SOC designs, in this article we'll assume the design uses embedded SRAM. Typically, SRAM IP providers make redundancy an option designers can choose. The most common form of redundancy is redundant rows and columns. Redundant columns tend to be easier to apply, as the address decoding is not affected, only the multiplexing of bit lines and I/O ports.

Failure Modes

Every physical structure in a memory block is potentially subject to failures caused by random defects. Failures may be classified according to the structures affected. The most common classifications are single-bit failures, row and column failures, and peripheral failures, which may be further subdivided into I/O, sense amp, address decoder, and logic failures. A complete discussion of failure mechanisms in memory designs is beyond the scope of this article, but in terms of repair using memory redundancy, the primary interest is in single-bit, row, and column failures (SBRC) occurring in the core of the memory array. In order to analyze SBRC failures with CAA, it is important to define which layers and defect types are associated with which memory failure modes. By examining the layout of a typical 6-T or 8-T SRAM bit cell, some simple associations may be made. For example, by looking at the connections of the word lines and bit lines to the bit cell, one could associate poly and contact-to-poly defects on row lines with row failures, and diffusion and contact-to-diffusion defects on column lines with column failures. Because contacts to poly and contacts to diffusion both connect to Metal1, the Metal1 layer has to be shared between row and column failures. Obviously, most of the layers in the memory design are used in multiple places, so not all defects on these layers will cause failures that are associated with repair resources. There are also non-repairable fatal defects, such as shorts between power and ground. We'll ignore these in this article and focus on repairable SBRC defects.

Repair Resources

Embedded SRAM designs typically make use of either Built-In Self Repair (BISR) or fuse structures that allow multiplexing out the failed structures and replacing them with the redundant structures. BISR has greater complexity, with greater impact on die area. Multiplexing with fuses requires that the die be tested, typically at wafer sort, and the associated fuses blown to accomplish the repair. The fuse approach has the advantage of simplicity and reduced area impact, but requires additional test time. Regardless of the method of applying the repair, having redundant structures in the design adds area, which directly increases the cost of manufacturing the design. Additional test time also increases cost, and designers may not have a good basis for calculating that cost. The goal of analyzing memory redundancy with CAA is to ensure that DLY is maximized while minimizing the impact on die area and test time.
Specification of Repair Resources

In order for a CAA tool to accurately analyze memory redundancy, it needs to know the repair resources available in each memory block, as well as a breakdown of the failure modes by layer and defect type and which repair resource they are associated with. These can be specified in Calibre as a series of CAA rules. For each memory block, the count of total and redundant rows and columns is also needed. In order to specifically identify the areas of the memory that can be repaired, one can either specify the bit cell name used in each memory block, or use a marker layer in the layout database to allow the tool to identify the core areas of the memory. Below is an example of such a memory redundancy specification called sramConfig. The first two lines list the CAA rules (i.e., the types of defects that can occur) that have redundant resources for a particular family of memory blocks. The first line contains the column rules, followed by the row rules. These are dependent on the type and structure of the memory block, but are independent of the number of rows and columns and the redundancy resources. The last two lines describe particular SRAM block designs and specify, in order, the block name, the rule configuration name, the total columns, redundant columns, total rows, redundant rows, dummy columns, dummy rows, and lastly the name of the bit cell. In this example, both block specifications refer to the same rule configuration (sramConfig). Given these parameters, Calibre calculates the unrepaired yield using the defect density data provided by the fab.

sramConfig = { {DIFF.OPEN} {DIFF.SHORT} {single.ODCO} {M1 {0.4}} {single.VIA1} {M2} }
{ {PO.OPEN} {single.POCO} {M1 {0.4}} {single.VIA1} {M2} {single.VIA2} {M3} }
R128x32 sramConfig 34 2 128 0 0 0 ram6t
R2048x32 sramConfig 34 2 2048 0 0 0 ram6t

Calculation of Yield with Redundancy

Once the CAA tool has performed the initial analysis providing ANF without redundancy, the yield with redundancy can be calculated. Since the calculation method is the same, each row or column in a memory core may simply be referred to as a unit, and the calculation method described only once. If present, dummy units do not cause functional failures and do not need to be repaired. In the initial analysis, dummy units do contribute to the total ANF, as the CAA tool has no knowledge of whether or not they are functional.

Calculation Method

The calculation method is based on the well-known principle of Bernoulli trials. The goal is to ensure the required number of good units out of some total number of units. First we calculate the number of active units in the core (5).
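Equation (5) is not shown; using the variable names defined in the next paragraph, it is simply

N_A = N_T − N_R − N_D    (5)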

NA is the required number of active units, NT is the total units, NR is the redundant units, and ND is the dummy units. In (6) we derive the number of functional non-dummy units, NF. The CAA tool determines the ANF for the total core area of each memory block listed in the redundancy configuration file. We can calculate the unit ANF using (7).
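Equations (6) and (7) are likewise not reproduced. A plausible reconstruction, consistent with the definitions above, is

N_F = N_T − N_D    (6)
ANF(unit) = ANF(core) / N_F    (7)

where ANF(core) is the ANF the tool reports for the whole memory core of the block; the division in (7), spreading that ANF evenly across the units, should be read as our interpretation rather than the tool's exact formula.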

In order to be consistent with probability theory, we convert ANF(unit) back to a yield using the Poisson equation in (8). This value becomes the p term in the Bernoulli equation, which denotes the probability of success. The probability of failure, q, is defined in (9).
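Under the Poisson model already introduced in (3), equations (8) and (9) take the form

p = Y(unit) = e^{-ANF(unit)}    (8)
q = 1 − p    (9)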

Now we need to add together the probabilities of all cases that satisfy the requirement of getting at least NA good units out of NF available units. The result, calculated in (10), is the repaired yield for a memory core for a specific rule. This is repeated over all rules in the memory configuration specification, and over all memory blocks listed with redundancy.
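Equation (10) is not reproduced; consistent with the description (at least N_A good units out of N_F, with per-unit success probability p and failure probability q), it is the binomial tail sum

Y(repaired) = \sum_{k=N_A}^{N_F} C(N_F, k) p^k q^{N_F − k}    (10)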

The term C(NF, (NF−k)) is the binomial coefficient, which is a standard mathematical function. For any memory core or rule where no repair resources exist, the calculation in (10) may be skipped, and the result is simply the original, unrepaired yield. Calculating the effective yield for memory blocks with no redundancy is still valuable if the CAA tool can post-process the calculations with different memory redundancy specifications. This enables the tool to present numeric and graphical output that makes it easy to determine the optimal amount of redundancy visually.

A Working Example

To see how effective memory redundancy can be, let's look at a hypothetical example. The memory of interest is a 4Mbit SRAM structured as 32K x 128 bits. In this case our goal is to realize at least 128 good units (NA) out of the total (NT). In our example, the values are:

NT = 130, NF = 128, NR = 2, ND = 0
NA = NT − NR − ND = 128

Let's say analysis determines that the unit yield considering one defect type will be 0.999. Then the unrepaired yield of the entire core will be 0.999 raised to the 128th power = 0.8798. If we carry through the analysis for all defect types, the expected yield is ~0.35. If we add redundancy to repair any unit defects, the repaired overall yield will also be approximately 0.999. Memory designers use a metric called the repair ratio to express the efficacy of memory redundancy:

Repair ratio = (Repaired yield − Unrepaired yield) / (1 − Unrepaired yield)    (11)
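To make the arithmetic concrete, here is a minimal Python sketch of the Bernoulli-trials calculation and of the repair ratio worked out in the next paragraph. The function names and the 0.999 per-unit yield are illustrative assumptions, not part of any Calibre interface.

from math import comb

def repaired_yield(p_unit, n_total, n_redundant, n_dummy=0):
    # Probability of at least N_A good units out of N_F available,
    # per equations (5), (6), (9), and (10).
    n_active = n_total - n_redundant - n_dummy   # N_A
    n_func = n_total - n_dummy                   # N_F
    q_unit = 1.0 - p_unit
    return sum(comb(n_func, k) * p_unit**k * q_unit**(n_func - k)
               for k in range(n_active, n_func + 1))

def repair_ratio(y_repaired, y_unrepaired):
    # Equation (11).
    return (y_repaired - y_unrepaired) / (1.0 - y_unrepaired)

p_unit = 0.999                                              # assumed unit yield, one defect type
print(p_unit ** 128)                                        # unrepaired core yield, ~0.8798
print(repaired_yield(p_unit, n_total=130, n_redundant=2))   # repaired yield, ~0.999 for this defect type
print(repair_ratio(0.99, 0.35))                             # ~0.985, matching the figures in the text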

A value in the high 90s is considered good. In this case the repair ratio is (0.99 − 0.35) / (1 − 0.35) = 0.985. Now that we have covered the methodology, let's walk through an example using Calibre to determine an optimum redundancy configuration. First we need to set up a configuration file for the tool. For our example 4Mbit SRAM we would have the following configuration entries:

sramConfig = { {DIFF.OPEN} {DIFF.SHORT} {single.ODCO} {M1.SHORT} {M1.OPEN} {single.VIA1} {M2.SHORT 0.6} {M2.OPEN 0.6} }
{ {PO.SHORT} {PO.OPEN} {single.POCO} {M1.SHORT} {M1.OPEN} {single.VIA1} {M2.SHORT 0.4} {M2.OPEN 0.4} {single.VIA2} {M3.SHORT} {M3.OPEN} }
spram_2048x32_core sramConfig 34 2 2050 2 0 0 ram6t
spram_128x32_core sramConfig 34 2 130 2 0 0 ram6t

The bit cell name (ram6t) tells the tool the name of the hierarchical layout element that describes a memory unit that can be repaired and should be considered in the analysis. This enables it to calculate the critical area of the entire memory core (all instantiations of ram6t).

Fig. 3. CAA using Calibre showing effects of memory redundancy on average number of faults (ANF).

With this configuration information, Calibre calculates the ANF for the memory with no redundancy, as well as for various redundancy configurations. In Figure 3 we see the results as a table with values of ANF for different redundancy configurations in columns 5-8: no redundancy, 0 rows and 1 column of redundancy, 0 rows and 2 columns, and so on. The table rows show the results for the entire design, for just the memory, and for specific types of defects. Notice in the highlighted row that the ANF of the 1024x32 memory core is improved substantially by adding one redundant row (the failure rate is cut in half in column 6 compared to column 5), but adding a second redundant row shows almost no further improvement (column 7).

Fig. 4. CAA results showing memory repair ratio.

Figure 4 shows the effects of different redundancy schemes in terms of repair ratio, again listed by design total, total of all analysis layers, by memory, by block, and by layers or groups. Figure 5 shows a plot produced by the CAA tool depicting ANF for each redundancy configuration and for each type of defect. It's clear that the combination of one redundant row and one redundant column makes a big improvement in ANF, while adding additional resources has little further impact. The value of these results is that the expected ANF is based on the actual layout of the memory under consideration and the specific defect density of the fab and process used for manufacturing. The designer now has a way to determine the impact of specific redundancy configurations on the expected yield of an embedded memory.

Fig. 5. Memory plot showing the average number of faults for different memory redundancy configurations.

Conclusion

Memory redundancy is a design technique intended to reduce manufacturing cost by improving die yield. If no redundancy is applied, then alternative methods to improve die yield may include making the design smaller or reducing defect rates. If redundancy is applied where it has no benefit, then die area and test time are wasted, which actually increases manufacturing cost. In between these two extremes, redundancy may or may not be applied depending on very broad guidelines: if defect rates are high, more redundancy may be needed; if defect rates are low, redundancy may be unnecessary. Analysis of memory redundancy using CAA and accurate foundry defect statistics is necessary to quantify the yield improvement that can be achieved and to determine the optimal configuration.

Author biography

Simon Favre is a technical marketing engineer in the Mentor Graphics Calibre division. Simon supports and directs improvements to the Calibre YieldAnalyzer product. Prior to joining Mentor Graphics, Simon was with Ponte Solutions, acquired by Mentor in 2008. Simon has worked at other EDA companies since 1998. Prior to 1998, Simon worked at semiconductor companies doing library development, custom design, yield engineering, and process development. Simon has BS and MS degrees in EECS from U.C. Berkeley. He can be contacted at simon_favre@mentor.com.

