Building a House of Quality


EXAMPLE PROBLEM

A company that manufactures bicycle components such as cranks, hubs, and rims wants to expand its product line by also producing handlebar stems for mountain bikes. Begin the development process by first listing the customer requirements, or WHATs: what the customer needs or expects in a handlebar stem.
Step 1: List Customer Requirements (WHATs)
Quality function deployment starts with a list of goals/objectives. This list is often referred to as the WHATs that a customer needs or expects in a particular product. This list of primary customer requirements is usually vague and very general in nature. Further definition is accomplished by defining a new, more detailed list of secondary customer requirements needed to support the primary customer requirements. Finally, the list of customer requirements is divided into a hierarchy of primary, secondary, and tertiary customer requirements, as shown in the Figure below.

Step 2: List Technical Descriptors (HOWs)


Implementation of the customer requirements is difficult until they are translated into counterpart
characteristics. Counterpart characteristics are an expression of the voice of the customer in technical
language. Each of the customer requirements is broken down into the next level of detail by listing one or
more primary technical descriptors for each of the tertiary customer requirements.

Step 3: Develop a Relationship Matrix Between WHATs and HOWs


The next step in building a house of quality is to compare the customer requirements and technical descriptors and determine their respective relationships.

RELATIONSHIP MATRIX

The inside of the house of quality, called the relationship matrix, is now filled in by the QFD team. The relationship matrix is used to represent graphically the degree of influence between each technical descriptor and each customer requirement.
It is common to use symbols to represent the degree of relationship between the customer requirements
and technical descriptors. For example, a solid circle represents a strong relationship. A single circle
represents a medium relationship. A triangle represents a weak relationship. The box is left blank if no
relationship exists.

An empty column indicates that a particular technical descriptor does not affect any of the customer
requirements and, after careful scrutiny, may be removed from the house of quality.
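
As a rough sketch of how the matrix can be handled numerically (the requirement and descriptor names below are hypothetical, and the 9-3-1 weighting for strong/medium/weak symbols is a common convention rather than a fixed rule), the empty-column check might look like this in Python:

# Hypothetical sketch: the relationship matrix stored as symbol codes.
# A common convention scores a strong relationship (solid circle) as 9,
# a medium one (circle) as 3, a weak one (triangle) as 1, and blank as 0.
SYMBOL_WEIGHTS = {"strong": 9, "medium": 3, "weak": 1, "": 0}

customer_requirements = ["lightweight", "aerodynamic", "durable"]      # hypothetical WHATs
technical_descriptors = ["mass", "finish", "fatigue life", "decals"]   # hypothetical HOWs

# One row per customer requirement, one column per technical descriptor.
relationships = [
    ["strong", "",       "medium", ""],  # lightweight
    ["",       "strong", "",       ""],  # aerodynamic
    ["weak",   "",       "strong", ""],  # durable
]

matrix = [[SYMBOL_WEIGHTS[s] for s in row] for row in relationships]

# An empty column (all zeros) flags a technical descriptor that affects
# no customer requirement and is a candidate for removal after scrutiny.
for j, descriptor in enumerate(technical_descriptors):
    if all(row[j] == 0 for row in matrix):
        print(f"'{descriptor}' affects no customer requirement; review it.")
# prints: 'decals' affects no customer requirement; review it.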

Step 4: Develop an Interrelationship Matrix Between HOWs


The roof of the house of quality, called the correlation matrix, is used to identify any interrelationship
between each of the technical descriptors. The correlation matrix is a triangular table attached to the
technical descriptors, as shown in Figure below.

Symbols are used to describe the strength of the interrelationship, for example:
A solid circle represents a strong positive relationship.
A circle represents a positive relationship.
An X represents a negative relationship.
An asterisk represents a strong negative relationship.
The symbols describe the direction of the correlation. In other words, a strong positive interrelationship would be a nearly perfect positive correlation, and a strong negative interrelationship would be a nearly perfect negative correlation. This diagram allows the user to identify which technical descriptors support one another and which are in conflict. Conflicting technical descriptors are extremely important because they are frequently the result of conflicting customer requirements and, consequently, represent points at which trade-offs must be made.
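
As a small illustrative sketch (the correlations below are hypothetical, reusing the descriptor names from the previous sketch), conflicting technical descriptors can be flagged programmatically:

# Hypothetical roof of the house of quality: pairwise correlations
# between technical descriptors, keyed by descriptor pair.
roof = {
    ("mass", "fatigue life"): "strong_negative",  # lighter stems tend to fatigue sooner
    ("mass", "finish"): "positive",
    ("finish", "fatigue life"): "positive",
}

# Negative entries mark trade-off points that need engineering attention.
conflicts = [pair for pair, corr in roof.items()
             if corr in ("negative", "strong_negative")]
print(conflicts)  # [('mass', 'fatigue life')]
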
Step 5: Competitive Assessments
The competitive assessments are a pair of weighted tables (or graphs) that depict, item for item, how competitive products compare with the organization's current products. The competitive assessment tables are separated into two categories, customer assessment and technical assessment, as shown in the Figures below.
CUSTOMER COMPETITIVE ASSESSMENT

"The customer competitive assessment is the block of columns corresponding to each customer
requirement in the house of quality on the right side of the relationship matrix, as shown in Figure below.
The numbers 1 through 5 are listed in the competitive evaluation column to indicate a rating of 1 for worst

and 5 for best. These rankings can also be plotted across from each customer requirement, using different
symbols for each product.
The customer competitive assessment is constructed by assigning ratings for each customer requirement
from 1 (worst) to 5 (best) for the new handlebar stem and major competitor A's and B's handlebar stem.

TECHNICAL COMPETITIVE ASSESSMENT

The technical competitive assessment makes up a block of rows corresponding to each technical descriptor in the house of quality beneath the relationship matrix, as shown in the Figure below.

Similar to the customer competitive assessment, the test data are converted to the numbers 1 through 5, which are listed in the competitive evaluation row to indicate a rating of 1 for worst and 5 for best. These rankings can then be entered below each technical descriptor, using the same numbers as used in the customer competitive assessment.
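
The text does not prescribe a particular conversion; one simple possibility is min-max scaling of the raw test data across the compared products. A minimal sketch, with hypothetical stem-mass measurements:

# Hypothetical test data: stem mass in grams for our design and competitors.
# Lower mass is better here, so the scale is inverted before rating.
measurements = {"ours": 310.0, "competitor_A": 280.0, "competitor_B": 350.0}

def rate_1_to_5(values, lower_is_better=False):
    """Min-max scale raw measurements onto a 1 (worst) to 5 (best) rating."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0   # avoid division by zero if all values are equal
    ratings = {}
    for name, v in values.items():
        score = (v - lo) / span             # 0.0 .. 1.0
        if lower_is_better:
            score = 1.0 - score
        ratings[name] = round(1 + 4 * score)  # map onto 1 .. 5
    return ratings

print(rate_1_to_5(measurements, lower_is_better=True))
# {'ours': 3, 'competitor_A': 5, 'competitor_B': 1}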

Step 6: Develop Prioritized Customer Requirements


The prioritized customer requirements make up a block of columns corresponding to each customer requirement in the house of quality on the right side of the customer competitive assessment, as shown in the Figure below. These prioritized customer requirements contain columns for importance to customer, target value, scale-up factor, sales point, and absolute weight.
IMPORTANCE TO CUSTOMER

The QFD team or the focus group ranks each customer requirement by assigning it a rating. The numbers 1 through 10 are listed in the importance to customer column to indicate a rating of 1 for least important and 10 for most important.
TARGET VALUE

This column is where the QFD team decides whether it wants to keep the product unchanged, improve the product, or make the product better than the competition. The target value is determined by evaluating the assessment of each customer requirement and setting a new assessment value that keeps the product as is, improves the product, or exceeds the competition. For instance, if lightweight has a product rating of 3 and the QFD team wishes to improve the product, the target value could be set to 4.
SCALE-UP FACTOR

The scale-up factor is the ratio of the target value to the product rating given in the customer competitive assessment. The higher the number, the more effort is needed. The important consideration here is the level where the product is now, what the target rating is, and whether the difference is within reason.
For instance, if lightweight has a product rating of 3 and the target value is 4, then the scale-up factor is 4/3, or about 1.3. Note that the scale-up factors are rounded off in the Figure.

SALES POINT

The sales point tells the QFD team how well a customer requirement will sell. The objective here is to promote the best customer requirement and any remaining customer requirements that will help in the sale of the product. The sales point is a value between 1.0 and 2.0, with 2.0 being the highest.
In our example, an aerodynamic look could help the sale of the handlebar stem, so its sales point is given a value of 1.5. If a customer requirement will not help the sale of the product, the sales point is given a value of 1.0.
ABSOLUTE WEIGHT

The absolute weight is calculated by multiplying the importance to customer, the scale-up factor, and the sales point:
Absolute Weight = (Importance to Customer)(Scale-up Factor)(Sales Point)
The weight can then be used as a guide for the planning phase of the product development.
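
A minimal sketch of this calculation (the importance values and most ratings below are hypothetical; lightweight's rating and target, and the aerodynamic sales point, follow the examples above):

# Hypothetical prioritization data for three customer requirements.
requirements = {
    #               importance, rating, target, sales point
    "lightweight":  (7,         3,      4,      1.0),
    "aerodynamic":  (5,         3,      3,      1.5),
    "durable":      (9,         4,      4,      1.0),
}

for name, (importance, rating, target, sales_point) in requirements.items():
    scale_up = target / rating               # scale-up factor from above
    absolute_weight = importance * scale_up * sales_point
    print(f"{name}: scale-up {scale_up:.2f}, absolute weight {absolute_weight:.1f}")
# lightweight: scale-up 1.33, absolute weight 9.3
# aerodynamic: scale-up 1.00, absolute weight 7.5
# durable: scale-up 1.00, absolute weight 9.0
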
Step 7: Develop Prioritized Technical Descriptors
The prioritized technical descriptors make up a block of rows corresponding to each technical descriptor in the house of quality below the technical competitive assessment, as shown in the Figure. These prioritized technical descriptors contain the degree of technical difficulty, target value, and absolute and relative weights. The QFD team identifies the technical descriptors that are most needed to fulfill the customer requirements and need improvement.

DEGREE OF DIFFICULTY

Many users of the house of quality add the degree of technical difficulty for implementing each technical
descriptor, which is expressed in the first row of the prioritized technical descriptors.
The degree of technical difficulty, when used, helps to evaluate the ability to implement certain quality
improvements. Degree of difficulty is determined by rating each technical descriptor from 1 (least
difficult) to 10 (very difficult). For instance, the degree of difficulty for die casting is 7, whereas the
degree of difficulty for sand casting is 3 because it is a much easier manufacturing process.
TARGET VALUE

This is an objective measure that defines values that must be obtained to achieve the technical descriptor.
How much it takes to meet or exceed the customer's expectations is answered by evaluating all the
information entered into the house of quality and selecting target values.
ABSOLUTE WEIGHT

The last two rows of the prioritized technical descriptors are the absolute weight and relative weight. A
popularand easy method for determining the weights is to assign numerical values to symbols in the
relationship matrix symbols, as shown previously in Figure.
Absolute weight of the jth interval is given by

RELATIVE WEIGHT

In a similar manner, the relative weight for the jth technical descriptor is given by replacing the degree of importance for the customer requirements with the absolute weight for the customer requirements:

b_j = sum of (R_ij)(d_i) for i = 1 to n

where
b_j = relative weight for the jth technical descriptor
d_i = absolute weight for the ith customer requirement

Higher absolute and relative weights identify areas where engineering efforts need to be concentrated. The primary difference between these weights is that the relative weight also includes information on the customer scale-up factor and sales point. Along with the degree of technical difficulty, these weights support decisions about where to allocate resources for quality improvement.
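
A minimal sketch of both calculations, reusing the 9-3-1 symbol weights and the hypothetical data from the earlier sketches (with the empty decals column dropped, as Step 3 suggests):

# Relationship matrix weights R_ij (rows: customer requirements,
# columns: technical descriptors), using the 9/3/1/0 convention.
R = [
    [9, 0, 3],  # lightweight
    [0, 9, 0],  # aerodynamic
    [1, 0, 9],  # durable
]
importance    = [7, 5, 9]        # degree of importance c_i (hypothetical)
absolute_cust = [9.3, 7.5, 9.0]  # absolute weights d_i from the sketch above

def column_weights(matrix, weights):
    """Weighted column sums: sum over i of matrix[i][j] * weights[i]."""
    return [sum(matrix[i][j] * weights[i] for i in range(len(matrix)))
            for j in range(len(matrix[0]))]

a = column_weights(R, importance)     # absolute weights a_j for the HOWs
b = column_weights(R, absolute_cust)  # relative weights b_j for the HOWs
print(a)  # [72, 45, 102] -> fatigue life deserves the most engineering effort
print(b)  # roughly [92.7, 67.5, 108.9]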

The Design FMEA Document


FMEA Number
On the top left corner of the document is the FMEA Number, which is only needed for tracking.
System, Subsystem, Component, Model Year/Number
This space is used to clarify the level at which the DFMEA is performed. The appropriate part or other number should be entered, along with information about the specific model and year.
Design Responsibility
The team in charge of the design or process should be identified in the space designated Design
Responsibility. The name and company (or department) of the person or group responsible for preparing
the document should also be included.
Prepared By
The name, telephone number, and address should be included in the Prepared By space for use when parts
of the document need explanation.
Key Date
The date the initial FMEA is due should be placed in the Key Date space.
FMEA Date
The date the original FMEA was compiled and the latest revision date should be placed in the FMEA Date
space.
Core Team
In the space reserved for Core Team, the names of the responsible individuals and departments that have
authority to perform tasks should be listed. If the different people or departments involved are not working
closely or are not familiar with each other, team members' names, departments, and phone numbers should
be distributed.
Item/Function
In this section, the name and number of the item being analyzed are recorded. This information should be as
precise as possible to avoid confusion involving similar items. Next, the function of the item is to be
entered below the description of the item. No specifics should be left out in giving the function of the
item. If the item has more than one function, they should be listed and analyzed separately. The function
of the item should be completely given, including the environment in which the system operates.
Potential Failure Mode
The Potential Failure Mode information may be one of two things. First, it may be the manner in which the item being analyzed could fail to meet the design criteria. Second, it may be a mode that causes a potential failure in a higher-level system or results from the failure of a lower-level system. All potential failure modes must be considered, including those that may occur under particular operating conditions and under certain usage conditions, even if these conditions are outside the range given for normal usage.
A good starting point when listing potential failure modes is to consider past failures, concern reports, and group brainstorming. Some typical failure modes include cracked, deformed, loosened, leaking, sticking, short-circuited, oxidized, and fractured.
Potential Effect(s) of Failure
The potential effects of failure are the effects of the failure as perceived by the customer. The effects of
failure must be described in terms of what the customer will notice or experience, so if conditions are
given by the customer there will be no dispute as to which mode caused the particular failure effect. It
must also be stated whether the failure will impact personal safety or break any product regulations. This
section of the document must also forecast what effects the particular failure may have on other systems or
sub-systems in immediate contact with the system failure. For example, a part may fracture, which may
cause vibration of the sub-system in contact with the fractured part, resulting in an intermittent system
operation. The intermittent system operation could cause performance to degrade and ultimately lead to customer dissatisfaction. Some typical effects of failure include noise, erratic operation, poor appearance, lack of stability, intermittent operation, and impaired operation.
Severity (S)
Severity is the assessment of the seriousness of the effect of the potential failure mode to the next
component, sub-system, system, or customer if it occurs. It is important to realize that the severity applies
only to the effect of the failure, not the potential failure mode. Reduction in severity ranking must not
come from any reasoning except for a direct change in the design. It should be stressed that no single list
of severity criteria is applicable to all designs; the team should agree on evaluation criteria and on a
ranking system that are consistent throughout the life of the document. Severity should be rated on a 1-to-10 scale, with 1 being none and 10 being the most severe.
Classification (CLASS)
This column is used to classify any special product characteristics for components, sub-systems, or
systems that may require additional process controls. There should be a special method to designate any
item that may require special process controls on the form.
Potential Cause(s)/Mechanism(s) of Failure
Every potential failure cause and/or mechanism must be listed completely and concisely. Some failure
modes may have more than one cause and/or mechanism of failure; each of these must be examined and
listed separately. Then, each of these causes and/or mechanisms must be reviewed with equal weight.
Typical failure causes may include incorrect material specified, inadequate design, inadequate life
assumption, over-stressing, insufficient lubrication capability, poor environment protection, and incorrect
algorithm. Typical failure mechanisms may include yield, creep, fatigue, wear, material instability, and
corrosion.
Current Design Control Prevention
There are two approaches to Design Control:
Prevention of the cause of failure
Detection of the cause of failure
Prevention controls should always be preferred over detection controls where possible. Examples of design controls that prevent occurrence of the failure mode include design reviews, design calculations, finite element analysis, computer simulation, mathematical modeling, and tolerance stack-up studies. These activities tend to prevent the failure mode before the design is released for production.
Occurrence (O)
Occurrence is the chance that one of the specific causes/mechanisms will occur. This must be done for
every cause and mechanism listed. A reduction in the occurrence ranking must not come from any reasoning except a direct change in the design; such a change is the only way the occurrence ranking can be reduced. Occurrence is rated on a 1-to-10 scale, with 1 being the least chance of occurrence and 10 the highest.
Current Design Control Detection
These are design controls that are expected to detect the failure mode before the release of the design. They include reliability tests such as fatigue tests, salt spray tests, prototype tests, and functional tests.
Detection (D)
This section of the document is a relative measure of the ability of the design control to detect either a potential cause/mechanism or the subsequent failure mode before the component, subsystem, or system is completed for production.
Risk Priority Number (RPN)
By definition, the Risk Priority Number is the product of the severity (S), occurrence (O), and detection
(D) rankings, as shown below:
RPN = (S) x (O) x (D)
This product may be viewed as a relative measure of the design risk. Values for the RPN can range from 1 to 1000, with 1 being the smallest design risk possible. This value is then used to rank order the various concerns in the design. For concerns with a relatively high RPN, the engineering team must make efforts to take corrective action to reduce the RPN. Likewise, even if a concern has a relatively low RPN, the engineering team should not overlook it or neglect efforts to reduce the RPN.
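
A minimal sketch of the RPN calculation and ranking (the failure modes and rankings below are hypothetical):

# Hypothetical DFMEA entries: (failure mode, severity, occurrence, detection).
entries = [
    ("stem fractures under load", 9, 2, 3),
    ("clamp bolt loosens",        7, 4, 4),
    ("anodized finish oxidizes",  3, 5, 2),
]

# RPN = S x O x D; rank concerns from highest to lowest risk.
ranked = sorted(entries, key=lambda e: e[1] * e[2] * e[3], reverse=True)
for mode, s, o, d in ranked:
    print(f"RPN {s * o * d:4d}  {mode}")
# RPN  112  clamp bolt loosens
# RPN   54  stem fractures under load
# RPN   30  anodized finish oxidizes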


PROCESS FMEA
