Design for assembly (DFA) is a process by which products are designed with ease
of assembly in mind. If a product contains fewer parts it will take less time to assemble, thereby
reducing assembly costs. In addition, if the parts are provided with features which make it easier
to grasp, move, orient and insert them, this will also reduce assembly time and assembly costs.
The reduction of the number of parts in an assembly has the added benefit of generally reducing
the total cost of parts in the assembly. This is usually where the major cost benefits of the
application of design for assembly occur.
Approaches
Design for assembly can take different forms. In the 1960s and 1970s, various rules and
recommendations were proposed to help designers consider assembly problems during
the design process. Many of these rules and recommendations were presented together with
practical examples showing how assembly difficulty could be reduced. However, it was not until
the 1970s that numerical evaluation methods were developed, allowing design for assembly
studies to be carried out on existing and proposed designs.
The first evaluation method was developed at Hitachi and was called the Assembly Evaluation
Method (AEM).[1] This method is based on the principle of "one motion for one part." For more
complicated motions, a point-loss standard is used and the ease of assembly of the whole
product is evaluated by subtracting points lost. The method was originally developed in order to
rate assemblies for ease of automatic assembly.
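The point-loss idea can be illustrated with a short sketch. The penalty values and the 100-point scale below are invented for illustration; they are not the proprietary AEM tables, only the general "one motion for one part, subtract points for anything more" principle described above.

```python
# Illustrative point-loss evaluation in the spirit of Hitachi's AEM.
# Penalty values and the 100-point scale are assumptions, not AEM data.

# Points lost for assembly motions beyond the ideal single downward motion.
PENALTIES = {
    "simple_downward": 0,  # the ideal "one motion for one part": no loss
    "horizontal": 2,       # sideways insertion is harder to automate
    "rotate": 4,           # reorientation adds a motion
    "fastener": 6,         # separate fastening operation
}

def assembly_score(motions):
    """Start from a perfect score of 100 and subtract points per motion."""
    return 100 - sum(PENALTIES[m] for m in motions)

print(assembly_score(["simple_downward", "rotate", "fastener"]))  # 90
```

A lower score flags a product as harder to assemble automatically, which is exactly how the method was used to rate candidate designs.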
Starting in 1977, Geoff Boothroyd, supported by an NSF grant at the University of Massachusetts
Amherst, developed the Design for Assembly method (DFA), which could be used to estimate
the time for manual assembly of a product and the cost of assembling the product on an
automatic assembly machine.[2] Recognizing that the most important factor in reducing assembly
costs was the minimization of the number of separate parts in a product, he introduced three
simple criteria which could be used to determine theoretically whether any of the parts in the
product could be eliminated or combined with other parts. These criteria, together with tables
relating assembly time to various design factors influencing part grasping, orientation and
insertion, could be used to estimate total assembly time and to rate the quality of a product
design from an assembly viewpoint. For automatic assembly, tables of factors could be used to
estimate the cost of automatic feeding and orienting and automatic insertion of the parts on an
assembly machine.
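The bookkeeping behind such a manual-assembly estimate can be sketched as follows. The 3-second "ideal" time per part is the figure commonly quoted for the Boothroyd method; the per-part handling and insertion times and the parts themselves are invented for illustration, not taken from the published tables.

```python
# Sketch of a manual DFA estimate: total the handling and insertion times,
# then rate the design against an ideal time for the theoretical minimum
# part count. Part data below is hypothetical.

IDEAL_TIME_PER_PART = 3.0  # seconds; assumed basic assembly time per part

def dfa_index(parts):
    """parts: list of (handling_s, insertion_s, theoretically_necessary)."""
    total_time = sum(h + i for h, i, _ in parts)
    n_min = sum(1 for _, _, needed in parts if needed)
    # Design efficiency: ideal time for the minimum part count / actual time.
    return IDEAL_TIME_PER_PART * n_min / total_time

parts = [
    (1.5, 1.5, True),   # base part: easy to grasp and insert
    (2.3, 5.2, True),   # motor: restricted access slows insertion
    (1.8, 8.0, False),  # separate fastener: candidate for elimination
]
print(round(dfa_index(parts), 2))  # 0.3
```

Eliminating or combining the theoretically unnecessary part would both shorten the total time and raise the index, which is the redesign pressure the method is meant to create.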
In the 1980s and 1990s, variations of the AEM and DFA methods were proposed, namely:
the GE Hitachi method, which is based on the AEM and DFA; the Lucas method;
the Westinghouse method; and several others based on the original DFA method. All such
methods are now referred to as design for assembly methods.
Implementation
Most products are assembled manually, and the original DFA method for manual assembly remains
the most widely used; it has had the greatest industrial impact throughout the world.
The DFA method, like the AEM method, was originally made available in the form of a handbook
where the user would enter data on worksheets to obtain a rating for the ease of assembly of a
product. Starting in 1981, Geoffrey Boothroyd and Peter Dewhurst developed a computerized
version of the DFA method which allowed its implementation in a broad range of companies. For
this work they were presented with many awards including the National Medal of Technology.
There are many published examples of significant savings obtained through the application of
DFA. For example, in 1981, Sidney Liebson, manager of manufacturing engineering for Xerox,
estimated that his company would save hundreds of millions of dollars through the application of
DFA.[3] In 1988, Ford Motor Company credited the software with overall savings approaching $1
billion.[4] In many companies DFA is a corporate requirement and DFA software is continually
being adopted by companies attempting to obtain greater control over their manufacturing costs.
There are many key principles in design for assembly.[5][6][7][8][9]
Notable examples
Two notable examples of good design for assembly are the Sony Walkman and
the Swatch watch. Both were designed for fully automated assembly. The Walkman line was
designed for "vertical assembly", in which parts are inserted in straight-down moves only. The
Sony SMART assembly system, used to assemble Walkman-type products, is a robotic system
for assembling small devices designed for vertical assembly.
The IBM Proprinter used design for automated assembly (DFAA) rules. These DFAA rules
help design a product that can be assembled automatically by robots, but they are useful even
with products assembled by manual assembly.[10]
Design for manufacturability (also sometimes known as design for manufacturing or DFM) is
the general engineering practice of designing products in such a way that they are easy to
manufacture. The concept exists in almost all engineering disciplines, but the implementation
differs widely depending on the manufacturing technology. DFM describes the process of
designing or engineering a product to facilitate the manufacturing process and thereby
reduce its manufacturing costs. DFM allows potential problems to be fixed in the design phase,
which is the least expensive place to address them. Other factors may affect the
manufacturability such as the type of raw material, the form of the raw material, dimensional
tolerances, and secondary processing such as finishing.
Depending on various types of manufacturing processes there are set guidelines for DFM
practices. These DFM guidelines help to precisely define various tolerances, rules and common
manufacturing checks related to DFM.
While DFM is applicable to the design process, a similar concept called DFSS (Design for Six
Sigma) is also practiced in many organizations.
Background
Traditionally, in the prenanometer era, DFM consisted of a set of different methodologies trying
to enforce some soft (recommended) design rules regarding the shapes and polygons of
the physical layout of an integrated circuit. These DFM methodologies worked primarily at the full
chip level. Additionally, worst-case simulations at different levels of abstraction were applied to
minimize the impact of process variations on performance and other types of parametric yield
loss. All these different types of worst-case simulations were essentially based on a base set of
worst-case (or corner) SPICE device parameter files that were intended to represent the
variability of transistor performance over the full range of variation in a fabrication process.
Functional yield loss is still the dominant factor and is caused by mechanisms such as
misprocessing (e.g., equipment-related problems), systematic effects such as printability or
planarization problems, and purely random defects.
High-performance products may exhibit parametric design marginalities caused by either
process fluctuations or environmental factors (such as supply voltage or temperature).
The test-related yield losses, which are caused by incorrect testing, can also play a
significant role.
Techniques
After understanding the causes of yield loss, the next step is to make the design as resistant to
them as possible. Techniques used for this include:
Substituting higher-yield cells where permitted by timing, power, and routability
Changing the spacing and width of the interconnect wires, where possible
Optimizing the amount of redundancy in internal memories
Substituting fault-tolerant (redundant) vias where possible
All of these require a detailed understanding of yield loss mechanisms, since these changes
trade off against one another. For example, introducing redundant vias will reduce the chance of
via problems, but increase the chance of unwanted shorts. Whether this is a good idea, therefore,
depends on the details of the yield loss models and the characteristics of the particular design.
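The via trade-off can be made concrete with a toy model. All failure probabilities and via counts below are invented for illustration; a real DFM decision would use calibrated yield-loss models from the fab.

```python
# Toy model of the redundant-via trade-off: doubling a via only opens if
# both copies fail, but the extra metal roughly doubles the exposure to
# shorts. Numbers are hypothetical, not real process data.

def chip_yield(n_via_sites, redundant, p_open=1e-6, p_short=2e-7):
    """Probability that every via site on the chip works."""
    if redundant:
        p_fail = p_open**2 + 2 * p_short  # opens need two failures; shorts double
    else:
        p_fail = p_open + p_short
    return (1.0 - p_fail) ** n_via_sites

single = chip_yield(10_000_000, redundant=False)
double = chip_yield(10_000_000, redundant=True)
print(double > single)  # with these numbers, redundancy wins
```

Raising `p_short` in this model eventually flips the comparison, which is the point of the paragraph above: the right choice depends on the relative magnitudes in the yield-loss model, not on a blanket rule.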
Material type
The most easily machined metals include aluminum, brass, and other soft metals. As
materials get harder, denser, and stronger, such as steel, stainless steel, titanium, and exotic
alloys, they become much harder to machine and take much longer to cut, making them less
manufacturable. Most types of plastic are easy to machine, although additions of fiberglass or
carbon fiber can reduce the machinability. Plastics that are particularly soft and gummy may
have machinability problems of their own.
Material form
Metals are available in many stock forms. Taking aluminum as an example, bar stock and plate
are the two most common forms from which machined parts are made. The size and shape of the
component may determine which form of material must be used. It is common for engineering
drawings to specify one form over the other. Bar stock is generally about half the cost of
plate on a per-pound basis. So although the material form isn't directly related to the geometry of
the component, cost can be removed at the design stage by specifying the least expensive form
of the material.
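A quick cost comparison shows why the stock form matters at the design stage. The rule of thumb above is that bar stock runs roughly half the per-pound price of plate; the $4/lb plate price and 12 lb part weight below are assumed figures, not market quotes.

```python
# Back-of-the-envelope material cost for the same part cut from bar vs plate.
# Prices are hypothetical; only the ~2x ratio comes from the text above.

PLATE_PRICE_PER_LB = 4.00                  # assumed plate price, $/lb
BAR_PRICE_PER_LB = PLATE_PRICE_PER_LB / 2  # ~1/2 of plate, per the rule of thumb

def material_cost(weight_lb, form):
    """Material cost for a part machined from 'bar' or 'plate' stock."""
    price = {"bar": BAR_PRICE_PER_LB, "plate": PLATE_PRICE_PER_LB}[form]
    return weight_lb * price

print(material_cost(12, "bar"))    # 24.0
print(material_cost(12, "plate"))  # 48.0
```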
Tolerances
A significant contributing factor to the cost of a machined component is the geometric tolerance
to which the features must be made. The tighter the tolerance required, the more expensive the
component will be to machine. When designing, specify the loosest tolerance that will serve the
function of the component. Tolerances must be specified on a feature-by-feature basis. There are
creative ways to engineer components with looser tolerances that still perform as well as ones
made to tighter tolerances.
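The tolerance-cost relationship is often summarized as a table of cost multipliers. The bands and multipliers below are invented for illustration, but the shape, tighter tolerance meaning sharply higher cost, reflects common machining practice and motivates specifying the loosest workable tolerance.

```python
# Hypothetical tolerance-to-cost bands: pick the loosest band a feature's
# tolerance satisfies. Multipliers are illustrative, not shop data.

# (tolerance in inches, relative machining-cost multiplier), loosest first.
TOLERANCE_COST_BANDS = [
    (0.030, 1.0),
    (0.010, 1.5),
    (0.005, 2.5),
    (0.001, 6.0),
]

def cost_multiplier(tolerance_in):
    """Return the multiplier for the loosest band the tolerance meets."""
    for band, multiplier in TOLERANCE_COST_BANDS:
        if tolerance_in >= band:
            return multiplier
    # Tighter than anything tabulated: assume the most expensive band.
    return TOLERANCE_COST_BANDS[-1][1]

print(cost_multiplier(0.030))  # 1.0
print(cost_multiplier(0.005))  # 2.5
```

In this model, tightening a feature from ±0.030" to ±0.005" multiplies its machining cost by 2.5, which is the kind of comparison a designer would make feature by feature.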