## Abstract

The growing complexity of integrated circuit manufacturing has resulted in decreasing product yield and reliability. This trend has accelerated with the advent of very deep sub-micron technologies, coupled with the introduction of newer materials and technologies such as copper interconnects and silicon-on-insulator, and with increased wafer sizes. The need to improve product yields has been recognized, and some yield enhancement techniques are currently used in industry CAD tools. Still, the significant increase in problem size implies that considerable time and effort can be saved if the designer can predict the yield at each design stage.
In this paper we undertake an effort to derive bounds on the yield of the routing for a given placement. When the design is routed, resulting in a yield which is significantly smaller than the bound, the designer can choose to change the router cost functions, modify the placement or even re-design the unit in an attempt to increase the yield.
We compare the bounds on yield obtained for a set of standard benchmarks against exact yield values for the "vanilla" routings, and the run times needed to calculate the two. The results indicate that reasonably good estimates of yield can be obtained in significantly less run time. The accuracy of the estimates increases for larger designs, as the simplifying assumptions made in the model no longer influence the estimates significantly.

## Abstract

Recent increases in the density and size of memory ICs have made it necessary to search for new defect-tolerance techniques, since the traditional methods are no longer effective enough. Several such new schemes have recently been proposed and implemented. Due to the high complexity of these new techniques compared to simple row and column redundancy, Monte-Carlo simulations were used to evaluate their yield enhancement. In this paper we present a yield analysis of one such new design and compare its yield to that of the traditional design.

## Abstract

Current VLSI technology allows the manufacture of large-area integrated circuits with sub-micron feature sizes, enabling designs with several millions of devices. However, imperfections in the fabrication process result in yield-reducing manufacturing defects, whose severity grows proportionally with the size and density of the chip. Consequently, the development and use of yield enhancement techniques at the design stage, to complement existing efforts at the manufacturing stage, is economically justifiable. Design-stage yield enhancement techniques are aimed at making the integrated circuit {\em defect-tolerant}, i.e., less sensitive to manufacturing defects, and they include incorporating redundancy into the design, modifying the circuit floorplan and modifying its layout. Successful designs of defect-tolerant chips must rely on accurate yield projections. This paper reviews the currently used statistical yield prediction models and their application to defect-tolerant designs. We then provide a detailed survey of various yield enhancement techniques and illustrate their use by describing the design of several representative defect-tolerant VLSI circuits.
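The statistical yield prediction models this survey refers to most commonly include the Poisson model and the negative binomial (clustered-defect) model. A minimal illustrative sketch, with function names and parameter values that are our own rather than from the paper:

```python
import math

def poisson_yield(area, defect_density):
    # Classic Poisson model: Y = exp(-A * D0),
    # where A * D0 is the average number of fatal defects per chip.
    return math.exp(-area * defect_density)

def negative_binomial_yield(area, defect_density, alpha):
    # Negative binomial model with clustering parameter alpha:
    # Y = (1 + A * D0 / alpha) ** (-alpha).
    # As alpha grows without bound this converges to the Poisson model.
    return (1.0 + area * defect_density / alpha) ** (-alpha)
```

For the same average defect count, the clustered model predicts a higher yield than the Poisson model, since clustering concentrates defects on fewer chips.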

## Abstract

The yield of a VLSI chip depends, among other factors, on the sensitivity of the chip to defects occurring during the fabrication process. To predict this sensitivity, one usually needs to compute the so-called critical area, which reflects how many and how large the defects must be in order to cause a circuit failure. The main computational problem in yield estimation is calculating the critical area efficiently for complicated, irregular layouts. This paper suggests a novel approach to this problem that results in an efficient algorithm: an interactive, accurate and fast method for the rapid evaluation of critical area as a design tool, with good visual feedback that allows layout improvement for higher yield. The algorithm is compared to other yield-prediction methods, which use either a Monte-Carlo approach (VLASIC) or a deterministic approach (SCA), and is shown to be faster. It also has the advantage of being able to graphically display a detailed "defect sensitivity map" that can assist chip designers in improving the yield of their layouts.
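For contrast with the deterministic algorithm described here, the Monte-Carlo style of critical-area estimation that tools like VLASIC employ can be sketched roughly as follows; the two-wire layout, defect radius and sampling window are purely illustrative and not taken from the paper:

```python
import random

def circle_hits_rect(cx, cy, r, rect):
    # True if a circular defect centered at (cx, cy) with radius r
    # overlaps the axis-aligned rectangle rect = (x0, y0, x1, y1).
    x0, y0, x1, y1 = rect
    dx = max(x0 - cx, 0.0, cx - x1)
    dy = max(y0 - cy, 0.0, cy - y1)
    return dx * dx + dy * dy <= r * r

def mc_critical_area(wires, r, window, samples=200_000, seed=1):
    # Estimate the critical area for shorts: the area of defect-center
    # positions at which a disk of radius r overlaps ALL the given wires.
    random.seed(seed)
    wx0, wy0, wx1, wy1 = window
    hits = 0
    for _ in range(samples):
        cx = random.uniform(wx0, wx1)
        cy = random.uniform(wy0, wy1)
        if all(circle_hits_rect(cx, cy, r, w) for w in wires):
            hits += 1
    return hits / samples * (wx1 - wx0) * (wy1 - wy0)
```

For two long parallel wires of width 1 separated by spacing 1, a defect of radius 1 (diameter larger than the spacing) is critical exactly when its center lies in the strip between them, so the Monte-Carlo estimate converges to that strip's area.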


## Abstract

This paper summarizes a practical experiment in designing a defect tolerant microprocessor and presents the underlying principles. Unlike memory integrated circuits, microprocessors have an irregular structure which complicates both the task of incorporating redundancy for defect tolerance in the design and the task of analyzing the resulting yield increase. The main goal of this paper is to present the detailed yield analysis of a defect tolerant microprocessor with an irregular structure which has been successfully fabricated. The approaches employed for achieving the goal of yield enhancement in the data path and the control part of the microprocessor are described first. Then, the yield enhancement due to the incorporated redundancy is analyzed. Finally, some practical and theoretical conclusions are drawn.

## Abstract

The primary goal of fault-tolerant designs of very large integrated circuits is yield enhancement, i.e., increasing the percentage of fault-free chips. Such designs were first employed in memory chips and have recently been extended to random-logic VLSI circuits and wafer-scale circuits. The active area of monolithic VLSI chips has always been limited by random fabrication defects, which appear impossible to eliminate even in the best manufacturing processes. The larger the circuit, the more likely it is to contain such a defect and fail to operate correctly. Thus, the defect density (number of defects per unit chip area) of any fabrication line limits the size of the largest defect-free chip that can be produced with commercially viable yield. Larger circuits must be designed with a fault-tolerance capability to overcome fabrication defects, to avoid unreasonable cost.
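The size limit described above can be made concrete with the simple Poisson yield model Y = exp(-A * D0). This illustrative sketch (our own, not from the paper) inverts that model to find the largest chip area that still meets a given yield floor:

```python
import math

def max_viable_area(min_yield, defect_density):
    # Invert the Poisson model Y = exp(-A * D0) for area:
    # A_max = -ln(Y_min) / D0.
    # Area comes out in the reciprocal units of defect_density.
    return -math.log(min_yield) / defect_density
```

For example, with D0 = 1 defect/cm^2 and a 50% yield floor, the largest viable chip is about ln 2, roughly 0.69 cm^2; doubling the defect density halves it, which is the cost pressure that motivates fault tolerance in larger circuits.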

## Abstract

As IC technology advances, the minimum feature size of VLSI circuitry continues to decrease. The smaller feature size of transistors has the potential to speed up the circuits and to increase the yield of the manufactured chips, the latter because the smaller silicon area of the chip results in a lower average number of defects per chip. Manufacturers of existing VLSI chips also attempt to take advantage of the possible reduction in feature size by scaling (shrinking) their existing designs. The effect of scaling the physical dimensions of VLSI circuits on their electrical characteristics (and consequently on their speed of operation) has already been studied. The effect of scaling on the yield has until now been studied only for special cases such as interconnection buses. The subject of this paper is the effect of scaling on the yield of more general VLSI circuits.
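The first-order effect mentioned above (smaller area, fewer average defects per chip) can be sketched under the simple Poisson yield model. Note that this deliberately ignores how shrinking also changes the circuit's sensitivity to defects of a given size, which is part of what the paper actually analyzes; all numbers here are illustrative:

```python
import math

def poisson_yield(area, defect_density):
    # Y = exp(-A * D0); A * D0 is the mean number of fatal defects per chip.
    return math.exp(-area * defect_density)

# Shrinking every linear dimension by a factor s scales the chip area by s**2.
s = 0.7              # illustrative shrink factor
area, d0 = 1.0, 0.5  # illustrative chip area (cm^2) and defect density (1/cm^2)

yield_original = poisson_yield(area, d0)
yield_shrunk = poisson_yield(area * s * s, d0)  # higher, to first order
```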

## Abstract

The recent increases in the size of memory ICs have made designers realize the need for new defect-tolerance techniques, since the traditional methods are no longer effective. One such new technique, the Flexible Multi-Macro (FMM) technique, has recently been suggested and implemented in a 1 Gb DRAM circuit. In this paper we present a yield analysis of the FMM design and compare its yield to that of the most common defect-tolerance technique of adding spare rows and columns to the memory array.

## Abstract

Several 64-bit adders have been designed and their expected yield has been estimated. Our results show that the yield of VLSI adders can be improved by modifying the layout of the original design and/or by choosing a different layout and circuit structure. In certain situations, these approaches can improve the yield by 10% to 17%.