
KMBOM03 Quality Toolkit for Managers

UNIT 1

1 Evolution of Quality Management


AKTUTHEINTACTONE2 MONTHS AGO 1 COMMENT

Quality management has been a part of many different cultures throughout history. It is
nothing new and can be traced as far back as 2000 BC in Babylonia. King Hammurabi of
Babylon introduced the concept of product quality and liability into the building industry of
the times. In the time of Egyptian pharaohs, the burial of the nobility was documented
systematically. The manner of carrying out the necessary rituals and the funerary goods to be
buried with the deceased are stated in each Book of the Dead. A systematic document is one
of the fundamentals in quality management to ensure consistency. The same steps will be
followed by different persons performing the same task. In doing so, the deviation from the
requirements can be minimized.

The first Emperor of China, Qin Shi Huangdi, decreed that all goods supplied for use in the
imperial household should carry a mark so that any maker who produced goods with faults
could be traced and punished. During the Middle Ages, merchant guilds were established to
guarantee the quality of workmanship and to define the standards to be expected by the
buyers. The emergence of mass production in the twentieth century increased the demand for
control of product quality. In addition, the demand for consistency of ammunition in wartime
pushed the need for more stringent product quality. If we look back at this brief history of the
development of quality management, it can be seen that we have gone through various phases
from quality control to quality assurance and ultimately to total quality management.

(i) Quality Control

Quality control is basically concerned with complying with requirements by inspecting the
products and eliminating nonconforming items. It does not address the root causes of
nonconformance. This type of control was developed during World War II to ensure the
consistency of ammunition being produced.

(ii) Quality Assurance

Similar to quality control, quality assurance originated from the military’s need for
consistency of military hardware. The success of Japanese manufacturers during the 1960s
and 1970s shifted the focus from quality control to quality assurance. In comparison to
quality control though, quality assurance focuses on the procedure of compliance and product
conformity to specification through activities such as vendor appraisal, line or shop-floor
troubleshooting, laboratory work, field problems and method development in the production
process. However, quality assurance is still basically an inspection process, though it checks
more than just the product.

(iii) Total Quality Control


Total quality control is an expansion of quality control from manufacturing to other areas of
an organization. The concept was introduced by the American scholar Dr Armand
Feigenbaum in the late 1950s. The Japanese adopted this concept and renamed it as
company-wide quality control (CWQC). It tries to look for long term solutions rather than
responding to short term variations. It focuses on pursuit of quality through elimination of
waste and non-value-added processes. Also, the concept is to expand quality control beyond the
production department. Quality control should cover all the other departments of an
organization, such as marketing, design, accounting, human resources, logistics and customer
services. Quality is not just the responsibility of production.

(iv) Total Quality Management (TQM)

Total quality management (TQM) evolved from the Japanese after World War II with the
inspiration from quality experts like Juran and Deming. As it evolved, it changed from a
process driven by external controls to a customer-oriented process. Quality is achieved
through prevention rather than inspection. It shifts the main concept from control to
management. No matter how stringent the control, there is still a chance of mistakes or
defects. The concept of management is to have a strategic plan, from identifying customer
requirements through after-sales services, producing products that meet or exceed the
customer requirements.

From the evolution of quality management, we can also identify some key attributes of
quality. We start by producing products in a consistent manner, meeting the necessary
requirements. It is important to trace and isolate defective items to prevent further use. If it
is found that a certain batch of products has safety problems after being sold in the market, it
is important that it can be identified and recalled. Means are developed to control quality at
the initial stage.

This is primarily achieved through inspection. Later, the scope shifted to quality
assurance. That is, mechanisms are developed to ensure the production process conforms to
the requirements of producing good products. The concept of control was extended beyond
the production department to include all departments of an organization. To deliver quality
product, it requires cooperation and integration of all departments. If the logistics department
does not ship the products on time, the customer will not be happy. At the final stage, quality
should not rely on control alone. Quality is built on customer focus, defect prevention
and so on. This is total quality management.

2 Concepts of Product and Service Quality



It is not easy to define the word Quality since it is perceived differently by different sets of
individuals. If experts are asked to define quality, they may give varied responses depending
on their individual preferences. These may be similar to the phrases listed below.

According to experts, the word quality can be defined as:

 Fitness for use or purpose.
 Doing the right thing the first time.
 Doing the right thing at the right time.
 Finding out and knowing what the consumer wants.
 Features that meet consumer needs and give customer satisfaction.
 Freedom from deficiencies or defects.
 Conformance to standards.
 Value or worthiness for money, etc.

Product Quality

“Product quality means to incorporate features that have a capacity to meet consumer needs
(wants) and give customer satisfaction by improving products (goods) and making them free
from any deficiencies or defects.”

Meaning of Product Quality

Product quality mainly depends on important factors like:

(i) The type of raw materials used for making a product.

(ii) How well the various production technologies are implemented.

(iii) Skill and experience of manpower that is involved in the production process.

(iv) Availability of production-related overheads like power and water supply, transport, etc.

Product quality has two main types of characteristics, viz. measured characteristics and attribute characteristics.

1. Measured characteristics

Measured characteristics include features like shape, size, color, strength, appearance, height,
weight, thickness, diameter, volume, fuel consumption, etc. of a product.

2. Attribute characteristics

Attribute characteristics cover countable defects, such as defective pieces per batch, defects per item,
number of mistakes per page, cracks in crockery, double threading in textile material,
discoloring in garments, etc.

Based on this classification, we can divide products into good and bad.
So, product quality refers to the total of the goodness of a product.
The five main aspects of product quality are listed below:

(i) Quality of design: The product must be designed as per the consumers’ needs and high-
quality standards.

(ii) Quality conformance: The finished products must conform (match) to the product
design specifications.

(iii) Reliability: The products must be reliable or dependable. They must not easily
break down or become non-functional, and they must not require frequent repairs. They must
remain operational for a satisfactorily long time to be called reliable.

(iv) Safety: The finished product must be safe for use and/or handling. It must not harm
consumers in any way.

(v) Proper storage: The product must be packed and stored properly. Its quality must be
maintained until its expiry date.

A company must focus on product quality before, during and after production.
Importance of Product Quality


(i) For Company: Product quality is very important for the company. Bad-quality
products erode consumer confidence and damage the image and sales of the company. They
may even threaten the survival of the company. So, it is very important for every company to
make better-quality products.

(ii) For Consumers: Product quality is also very important for consumers. They are ready to
pay high prices, but in return they expect best-quality products. If they are not satisfied with
the quality of a company's products, they will purchase from its competitors. Nowadays, very
good quality international products are available in the local market, so if the domestic
companies don't improve their products' quality, they will struggle to survive in the market.
3 Dimensions of Quality
Important Dimensions of Quality formulated by David A. Garvin

David A. Garvin, a specialist in the area of quality control, argues that quality can be used in a strategic way to compete effectively and
an appropriate quality strategy would take into consideration various important dimensions of quality.

Eight dimensions of product quality management can be used at a strategic level to analyze quality characteristics. The concept was
defined by David A. Garvin, formerly C. Roland Christensen Professor of Business Administration at Harvard Business School (died 30
April 2017). Some of the dimensions are mutually reinforcing, whereas others are not—improvement in one may be at the expense of
others. Understanding the trade-offs desired by customers among these dimensions can help build a competitive advantage.

Garvin’s eight dimensions can be summarized as follows:

1. Performance

It involves the various operating characteristics of the product. For a television set, for example, these characteristics will be the quality
of the picture, sound and longevity of the picture tube.

2. Features

These are characteristics that are supplemental to the basic operating characteristics. In an automobile, for example, a stereo CD
player would be an additional feature.

3. Reliability

Reliability of a product is the degree to which the product dependably delivers its benefits over a long period of time.

It addresses the probability that the product will work without interruption or breaking down.

4. Conformance

It is the degree to which the product conforms to pre-established specifications. All quality products are expected to precisely meet the
set standards.

5. Durability

It measures the length of time that a product performs before a replacement becomes necessary. The durability of home appliances
such as a washing machine can range from 10 to 15 years.

6. Serviceability

Serviceability refers to the promptness, courtesy, proficiency and ease in repair when the product breaks down and is sent for repairs.

7. Aesthetics

Aesthetic aspect of a product is comparatively subjective in nature and refers to its impact on the human senses such as how it looks,
feels, sounds, tastes and so on, depending upon the type of product. Automobile companies make sure that in addition to functional
quality, the automobiles are also artistically attractive.

8. Perceived quality

An equally important dimension of quality is the perception of the quality of the product in the mind of the consumer. Honda cars, Sony
Walkman and Rolex watches are perceived to be high quality items by the consumers.

4 Quality Philosophies: Deming’s



Deming’s 14 Points on Quality Management, or the Deming Model of Quality Management,


a core concept on implementing total quality management (TQM), is a set of management
practices to help companies increase their quality and productivity.
As a management consultant, William Edwards Deming is known for the PDCA
(Plan-Do-Check-Act) cycle, in which he emphasizes the importance of continuous
improvement within an organization, as opposed to making changes after the fact.

Deming’s 14 points for Management were first presented in his book Out of the Crisis. With
the 14 important management principles he offered a way to drastically improve the
company’s effectiveness. Many of these management principles are philosophical in nature,
and some are more programmatic.

1. Create constancy of purpose

Strive for constant improvement in products and services, with the aim of becoming
competitive and ensuring consistency in the way business is done, which will ensure retention
of employment. Do not just make adjustments at the end of the production process, but
evaluate if improvements are necessary during the process and get started immediately.

2. The new philosophy

A new (economic) time offers new chances and challenges, and management must take
responsibility for being open to such changes. Without change, a company cannot sustain
itself in a time when innovation occurs every day.

3. Cease dependence on inspection

End the dependence on inspections and final checks to ensure quality. It is better that
quality checks take place during the process so that improvements can be made earlier. This
section links back to the first point, which promotes the importance of interim improvements.

4. End ‘lowest tender’ contract

Move towards a single supplier for any one item. Stop doing business with suppliers based
on the lowest price alone. It is worthwhile in the long term to build a good and
long-standing relationship with suppliers, which fosters trust and increases loyalty. An
organization should be able to rely on their suppliers; they supply the parts for the production
line and are the first link to a high quality product.

5. Continually seek out problems

Improve constantly and forever. Continuous process improvement of production and service
results in improved quality and productivity, which in turn leads to cost reduction. This part
also relates to the first and third points. Improved quality leads to less waste of other raw
materials, which subsequently has a cost-effective effect.

6. Institute training on the job

Training and development of employees is necessary for the survival of an organization. By


integrating it into the organization, it will be considered as normal for the employees, as part
of their Personal Development Plan.

7. Institute supervision

Adopt and institute leadership. Leadership needs to be stimulated. By leading and
supervising, managers are able to help employees and make machines work better. Their
helicopter view ensures that they can see everything that happens in the workplace. They
will also have to delegate more tasks so that they can fully focus on the big picture.

8. Drive out fear

Fear is paralysing. Therefore, fear must be eliminated on the work floor so that everyone can
work effectively for the company, feel safe and take risks. Transparent communication,
motivation, respect and interest in each other and each other’s work can contribute to this.

9. Break down barriers

By eliminating the boundaries between departments, cooperation can be better and different
expert teams will understand each other better. This can be done by, for example, the creation
of multifunctional teams, each with an equal share and open to each other’s ideas.

10. Eliminate exhortations

Remove ‘stimulating’ slogans from the workplace. Such slogans, warnings and exhortations
are perceived as being patronising. Quality and production problems do not arise from the
individual employee, but from the system itself.

11. Eliminate targets

No more focus on achieving certain margins; that impedes professionals from performing
their work well and taking the necessary time for it. Rushing through the work can cause
production errors. Managers should therefore focus on quality rather than quantity.

12. Permit pride of workmanship

Let employees be proud of their craftsmanship and expertise again. This relates back to the
eleventh point. Employees feel more satisfaction when they get a chance to execute their
work well and professionally, without feeling the pressure of deadlines.

13. Institute education

Integrate and promote training, self-development and improvement for each employee. This
directly connects to the sixth point. By encouraging employees to work for themselves and to
see their studies and training as a self-evident part of their jobs, they are able to elevate
themselves to a higher level.

14. The transformation is everyone’s job

Transformation is the work of everyone. Set forth concrete actions to implement and realise
transformation and change throughout the organization.

5 Quality Philosophies: Juran’s


Juran was a great Founding Father of quality, and was responsible for the famous Juran Trilogy concept. Juran’s approach to quality
control had Japanese roots. While Japan was price-competitive with the rest of the world, the quality of product did not measure up.
This quality philosophy consists of three steps: Quality Planning, Quality Control and Quality Improvement.

1. Quality Planning

The quality planning phase is the activity of developing products and processes to meet customers’ needs. It involves building an
awareness of the need to improve, setting goals and planning ways the goals can be reached. This begins with management’s
commitment to planned change. It also requires a highly trained and qualified staff. It deals with setting goals and establishing the
means required to reach the goals. Below are the steps in the quality planning process:

 Establish quality goals


 Identify the customers: those who will be impacted by the efforts to meet the goals
 Determine the customer’s needs
 Develop processes that can produce the product to satisfy customers’ needs and meet quality goals under operating conditions.
 Establish process controls, and transfer the resulting plans to the operating forces

2. Quality Control

This process deals with the execution of plans and includes monitoring operations so as to detect differences between actual
performance and goals. It means developing ways to test products and services for quality. Any deviation from the standard will require
changes and improvements. It is outlined in three steps:

 Evaluate actual quality performance


 Compare actual performance to quality goals
 Act on the difference

3. Quality Improvement

It is a continuous pursuit toward perfection. Management analyses processes and systems and reports back with praise and
recognition when things are done right. This is the process for obtaining breakthroughs in quality performance, and it consists of several
steps:

 Establish the infrastructure needed to secure annual quality improvement


 Identify the specific needs for improvement: the improvement projects
 Establish project teams with clear responsibility for bringing the project to a successful conclusion
 Provide the resources, motivation, and training needed by the teams to diagnose the causes, stimulate the establishment of remedies,
and establish controls to hold the gains.

6 Crosby’s Quality Philosophy


The 14 Steps of Crosby – Philip Crosby, not Bing Crosby – formulate a program for Total Quality Management efforts. Crosby’s
fourteen steps rely on the foundational thought that any money a company spends on quality improvement is money that is well
spent. In Crosby’s theory, he cites four absolutes of quality management:

A company ought to define quality not as something that is “good” or something that is “exquisite” but instead as something that
conforms to company, stakeholder, or end-user requirements.

Quality starts with prevention – defects should be prevented rather than found after the fact. By preventing defects and other obstacles
to quality, companies save money.

The standard for performance for any company needs to be “zero defects.” Otherwise, it just doesn’t cut it.

In order to measure quality, rather than relying upon intricate indices, companies need to focus on the Price of Non-conformance. The
price of non-conformance, sometimes called the cost of quality, is a measure of the costs associated with producing a product or
service of low quality.

The 14 Steps of Crosby are meant to keep your quality improvement project on track.

1. Commitment of Management

First and foremost, management must be committed to improving the quality in a company. This commitment must also be transparent
to all employees so that proper attitudes towards a Zero Defect product or service line are modeled.

2. Formulate the Quality Improvement Team


Forming a quality improvement team is the second step to achieving total quality management. Search for team members who will
model quality improvement commitment, and who are not already over-committed to other projects. The quality improvement team
should be able to effectively commit themselves to improvement of quality.

3. Measure for Quality in Current Practices

Before you can establish a plan for improving quality, you first have to know exactly where your products and services lie when it comes
to conforming to requirements. Thus, the third step on Crosby’s list is to measure quality. Determine where there is room for
improvement and where potential for improvement exists.

4. What Will the Cost of Quality Be?

How much is your cost of non-conformance to standards? What is the cost for quality? By answering these questions, you can
demonstrate to all company employees that there is a need for a quality improvement system. Explain how the cost of quality figures
into the overall company plan.

5. Quality Awareness is Central to Success

You will need to raise employee awareness to the importance of quality management. By doing this, and making quality a central
concern to employees, you will increase the likelihood that your quality improvement efforts will be realized.

6. Remember the Quality Problems? Take Corrective Action

By now, you will have determined what your company’s quality problems are. It is now time to take corrective action to eliminate the
defects that have been identified. Be sure that you install a system, using causal analysis techniques, to ensure that these problems
don’t reoccur in the future.

7. Plan for Zero Defects

You need to create a committee to ensure that there are zero defects in your products and services. For Crosby, remember, it’s not
enough to have “as few as possible” defects. Instead, you really need to have this number at zero: establish a zero-defect
tolerance in your company.

8. Practice Effective Training for Supervisors

Ensure that your supervisors can carry out the tasks required of them for maintaining quality. By practicing supervisor training, with
quality in mind (and the four absolutes), you will be more likely to achieve zero-defect status.

9. Happy Zero Defects Day!

Hold a quality event, called a zero defects day, where all employees are made aware of the change that has taken place. By holding a
zero defects day in your company when implementing a total quality management project, you can be sure that you are increasing
awareness for quality in your workplace.

10. Involve Everyone in Goal Setting

After implementing a change, you will need to ensure that you involve everyone – both employees and supervisors – in the goal setting
process. By bringing everyone in the company in on setting goals for improvement, you can ensure greater commitment to achieving
zero defects.

11. Eliminate Causes of Errors

Error-cause removal is necessary for the successful implementation of any quality improvement effort. Encourage your employees to
come to management with any obstacles or issues that arise in attempting to meet improvement goals. By having employees
communicate obstacles before they become crises, you can avert many of the dampers for quality improvement efforts.

12. Implement Recognition for Participants

The twelfth step of Crosby’s 14 Steps is the implementation of employee recognition. By regularly recognizing those who participate in
quality improvement efforts, employees will be much more likely to continue to participate.

13. Create Quality Councils

By bringing together specialists and employees, you can create a focused effort towards creating lasting quality improvement
implementations. Make sure your quality councils meet on a regular basis.

14. Lather…Rinse…REPEAT!!!

Quality improvement doesn’t end because you have run out of the 14 Steps of Crosby! In order to really make improvements in the
quality of your products and services, you will need to do it over again…and again…and again. Now go get started on your quality
improvement projects!

7 Quality Cost

Cost of Quality (COQ) is a measure that quantifies the cost of control/conformance and the cost of failure of control/non-conformance.
In other words, it sums up the costs related to prevention and detection of defects and the costs due to occurrences of defects.

Definition by ISTQB

Cost of quality: The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal
failure costs and external failure costs.

Definition by QAI

Money spent beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a
quality (defect free) product. The Cost of Quality includes prevention, appraisal, and correction or repair costs.

Quality costs are categorized into four main types. These are:

 Prevention costs
 Appraisal costs
 Internal failure costs and
 External failure costs.

These four types of quality costs are briefly explained below:

(i) Prevention costs

It is much better to prevent defects than to find and remove them from products. The costs incurred to avoid or minimize the
number of defects in the first place are known as prevention costs. Some examples of prevention costs are improvement of manufacturing
processes, worker training, quality engineering, statistical process control, etc.

(ii) Appraisal costs

Appraisal costs (also known as inspection costs) are those costs that are incurred to identify defective products before they are shipped
to customers. All costs associated with the activities that are performed during manufacturing processes to ensure required quality
standards are also included in this category. Identifying defective products involves maintaining a team of inspectors, which may be
very costly for some organizations.

(iii) Internal failure costs

Internal failure costs are those costs that are incurred to remove defects from the products before shipping them to customers.
Examples of internal failure costs include cost of rework, rejected products, scrap etc.

(iv) External failure costs

If defective products have been shipped to customers, external failure costs arise. External failure costs include warranties,
replacements, lost sales because of bad reputation, payment for damages arising from the use of defective products etc. The shipment
of defective products can dissatisfy customers, damage goodwill and reduce sales and profits.

FORMULA / CALCULATION

Cost of Quality (COQ) = Cost of Control + Cost of Failure of Control


where

Cost of Control = Prevention Cost + Appraisal Cost

and

Cost of Failure of Control = Internal Failure Cost + External Failure Cost
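
The formula above can be illustrated with a short calculation. All cost figures below are hypothetical, chosen only to show how the components combine:

```python
# Hypothetical quality-cost figures (currency units) for one project.
prevention_cost = 10_000       # training, process improvement
appraisal_cost = 6_000         # inspection and testing
internal_failure_cost = 4_000  # rework and scrap before shipment
external_failure_cost = 8_000  # warranty claims and returns

# Cost of Control = Prevention Cost + Appraisal Cost
cost_of_control = prevention_cost + appraisal_cost
# Cost of Failure of Control = Internal Failure Cost + External Failure Cost
cost_of_failure_of_control = internal_failure_cost + external_failure_cost
# COQ = Cost of Control + Cost of Failure of Control
coq = cost_of_control + cost_of_failure_of_control

print(f"Cost of Control: {cost_of_control}")                        # 16000
print(f"Cost of Failure of Control: {cost_of_failure_of_control}")  # 12000
print(f"Cost of Quality (COQ): {coq}")                              # 28000

# Expressing COQ as a percentage of a (hypothetical) total cost
# makes it comparable across projects.
total_cost = 140_000
print(f"COQ as % of total cost: {coq / total_cost:.1%}")            # 20.0%
```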


NOTES

 In its simplest form, COQ can be calculated in terms of effort (hours/days).


 A better approach will be to calculate COQ in terms of money (converting the effort into money and adding any other tangible costs like
test environment setup).
 The best approach will be to calculate COQ as a percentage of total cost. This allows for comparison of COQ across projects or
companies.
 To ensure impartiality, it is advised that the Cost of Quality of a project/product be calculated and reported by a person external to the
core project/product team (Say, someone from the Accounts Department).
 It is desirable to keep the Cost of Quality as low as possible. However, this requires a fine balancing of costs between Cost of Control
and Cost of Failure of Control. In general, a higher Cost of Control results in a lower Cost of Failure of Control. But, the law of
diminishing returns holds true here as well.

8 Quality Leadership

Quality leadership is a precondition for implementing quality management. How


organizational leaders structure and direct an organization as well as how they behave within
an organization are critical elements to the success of an effective quality management
process. This article addresses these leadership issues and presents a plan for quality
management implementation. Several behavioral tools are described that address potential
obstacles to the implementation process.

Those following the teachings of quality gurus such as Deming, Juran and Feigenbaum and
implementing quality culture, tools and techniques are following approaches that tend toward
leadership traits that include empowerment, a focus on people, a strong strategic viewpoint
and an awareness of integrating different disciplines. Other traits include strong integrity and
an awareness of social responsibilities.
Quality Leadership Characteristics

1. Values: Integrity, Trust and Culture


2. Vision: Strategic Focus
3. Inspiration: Communications Skills, Role Model, Motivational and Mentor
4. Innovative: Change Agent
5. Systems View: Interactive
6. Empowering: Focus on Employees
7. Customer Focus: Society
8. Business and Quality Knowledge

UNIT 2

Topic 2 7QC Tools



The Seven Basic Tools of Quality (also known as the 7 QC Tools) originated in Japan when the country was undergoing a major quality revolution,
and they became a mandatory topic in Japan’s industrial training program. These tools, which comprise simple graphical and
statistical techniques, were helpful in solving critical quality-related issues. They were often referred to as the Seven Basic Tools of Quality
because they could be implemented by any person with very basic training in statistics and were simple to apply to complex quality-related
issues.

The 7 QC tools can be applied across any industry, from the product development phase through delivery. Even today the 7 QC tools enjoy the same
popularity and are extensively used in various phases of Six Sigma (DMAIC or DMADV), in the continuous improvement process (PDCA cycle) and
in Lean management (removing waste from processes).
The seven QC tools are:

1. Stratification (Divide and Conquer)


2. Histogram
3. Check Sheet (Tally Sheet)
4. Cause-and-effect diagram (“fishbone” or Ishikawa diagram)
5. Pareto chart (80/20 Rule)
6. Scatter diagram
7. Control chart (Shewhart chart)

1. Stratification (Divide and Conquer)

Stratification is a method of dividing data into sub-categories, classifying it by group, division, class or level, which helps in deriving
meaningful information to understand an existing problem.

The very purpose of stratification is to divide the data and conquer the meaningful information needed to solve a problem.

 Un-stratified data (an employee reached the office late on the following dates):
 5-Jan, 12-Jan, 13-Jan, 19-Jan, 21-Jan, 26-Jan, 27-Jan
 Stratified data: the same data classified by day of the week
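
As a sketch, the dates above can be stratified by day of the week in a few lines of Python. The year is not stated in the example, so 2021 is assumed here purely for illustration:

```python
from datetime import date

# Days in January on which the employee was late (from the example above).
late_days = [5, 12, 13, 19, 21, 26, 27]

# Stratify: group the dates by day of the week (year 2021 assumed).
by_weekday = {}
for day in late_days:
    weekday = date(2021, 1, day).strftime("%A")
    by_weekday.setdefault(weekday, []).append(day)

for weekday, days in by_weekday.items():
    print(f"{weekday}: {days}")
```

Once grouped, any pattern, such as lateness clustering on particular weekdays, becomes visible, which is exactly the divide-and-conquer purpose described above.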

2. Histogram

The histogram, introduced by Karl Pearson, is a bar graph in which each bar represents the frequency of a class of values.

The very purpose of a histogram is to study the density of data in a given distribution and understand which factors or values recur most often.

A histogram helps in prioritising factors and identifying the areas that need the most immediate attention.
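A minimal frequency-binning sketch, with invented measurements and an assumed bin width, shows the counting behind a histogram:

```python
# Invented measurements; bins of width 0.5 starting at 4.0.
measurements = [4.1, 4.3, 4.4, 4.6, 4.6, 4.7, 5.0, 5.1, 5.5, 5.9]
bin_width = 0.5
start = 4.0

bins = {}
for x in measurements:
    # Lower edge of the bin this measurement falls into.
    lo = round(start + bin_width * int((x - start) // bin_width), 1)
    bins[lo] = bins.get(lo, 0) + 1
# bins maps each bin's lower edge to its frequency.
```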

3. Check sheet (Tally Sheet)

A check sheet can be a matrix, structured table or form for collecting and analysing data. When the information collected is quantitative in nature, the check sheet is also called a tally sheet.

The very purpose of a check sheet is to list the important checkpoints or events in tabular or matrix form and keep updating or marking their status as they occur, which helps in understanding progress, defect patterns and even the causes of defects.
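A tally sheet can be sketched with a simple counter over observed defects; the defect categories here are invented:

```python
from collections import Counter

# Each observed defect is recorded as it occurs; the categories are
# invented for illustration.
observations = ["scratch", "dent", "scratch", "crack", "scratch", "dent"]
tally = Counter(observations)
# tally now holds the frequency of each defect type.
```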
4. Cause-and-effect diagram ("Fishbone" or Ishikawa diagram)

The cause-and-effect diagram, introduced by Kaoru Ishikawa, helps in identifying the various causes (or factors) leading to an effect (or problem), and in deriving meaningful relationships among them.

The very purpose of this diagram is to identify all the root causes behind a problem.

Once a quality-related problem is defined, the factors leading to it are identified. We then keep identifying the sub-factors behind each identified factor until we reach the root cause of the problem. The result is a diagram of branches and sub-branches of causal factors resembling a fishbone.

In the manufacturing industry, to identify the source of variation, the causes are usually grouped into the following major categories:

 People
 Methods
 Machines
 Material
 Measurements
 Environment
5. Pareto chart (80 – 20 Rule)

The Pareto chart is named after Vilfredo Pareto. It revolves around the 80-20 rule, which holds that in any process roughly 80% of problems or failures are caused by just 20% of the factors, referred to as the Vital Few, while the remaining 20% of problems or failures are caused by the other 80% of factors, referred to as the Trivial Many.

The very purpose of a Pareto chart is to highlight the most important factors, that is, those responsible for the major share of a problem or failure.

A Pareto chart combines bars and a line: individual factors are represented by bars in descending order of their impact, and the cumulative total is shown by a line.

Pareto charts help experts in the following ways:

 Distinguish between the vital few and the trivial many.
 Display the relative importance of the causes of a problem.
 Help focus on the causes that will have the greatest impact when solved.
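A sketch of the underlying computation, with invented defect counts, sorts causes by impact and accumulates the percentage line:

```python
# Invented defect counts for illustration.
defect_counts = {"misalignment": 52, "scratch": 21, "porosity": 14,
                 "burr": 8, "other": 5}

total = sum(defect_counts.values())
ordered = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)

# Bars in descending order plus the cumulative-percentage line.
cumulative = []
running = 0
for cause, count in ordered:
    running += count
    cumulative.append((cause, count, 100.0 * running / total))
```

Here a single cause accounts for over half the defects, which is exactly the "vital few" pattern the chart is designed to expose.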

6. Scatter Diagram

A scatter diagram or scatter plot is a statistical tool that plots the dependent variable on the Y-axis against the independent variable on the X-axis, with each observation as a dot. The pattern of the dots can reveal any existing relationship between the variables, expressible as an equation Y = F(X) + C, where C is an arbitrary constant.

The very purpose of a scatter diagram is to establish a relationship between a problem (the overall effect) and the causes affecting it.

The relationship can be linear, curvilinear, exponential, logarithmic, quadratic, polynomial, and so on. The stronger the correlation, the more strongly the relationship holds. The variables can be positively or negatively related, as shown by the slope of the line fitted to the scatter diagram.
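The strength of the relationship seen in a scatter diagram is quantified by the Pearson correlation coefficient. A sketch with invented, perfectly linear data (y = 2x + 1), so the coefficient should come out at 1.0:

```python
import math

# Invented, perfectly linear data (y = 2x + 1).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2 * x + 1 for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Pearson correlation coefficient r = cov(x, y) / (sd_x * sd_y).
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
r = cov / (sd_x * sd_y)
```

Values of r near +1 or -1 indicate a strong positive or negative relationship; values near 0 indicate little linear relationship.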

7. Control Chart (Shewhart Chart)

The control chart, also called the Shewhart chart after Walter A. Shewhart, is a statistical chart that helps determine whether an industrial process is in control and capable of meeting customer-defined specification limits.

The very purpose of a control chart is to determine whether the process is stable and capable under current conditions.

In a control chart, data are plotted against time on the X-axis. A control chart always has a central line (the average or mean), an upper line for the upper control limit and a lower line for the lower control limit. These lines are determined from historical data.

By comparing current data to these lines, experts can draw conclusions about whether the process variation is consistent (in control, affected only by common causes of variation) or unpredictable (out of control, affected by special causes of variation). The chart thus helps in differentiating common causes from special causes of variation.

Control charts are very popular and widely used in quality control, in Six Sigma (the Control phase), and in defining process capability and variation in production. The tool also shows how well a manufacturing process is performing with respect to customer expectations.

A control chart helps in predicting process performance, understanding production patterns and studying how a process changes or shifts from its specified control limits over time.
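As a minimal sketch with invented data, an individuals chart can be computed by estimating sigma from the average moving range divided by the d2 constant (1.128 for subgroups of two), as in standard Shewhart practice:

```python
# Individuals chart sketch; the samples are invented, with a deliberate
# shift in the last value.
samples = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.0, 10.3, 13.0]

centre = sum(samples) / len(samples)

# Estimate sigma from the average moving range (d2 = 1.128 for n = 2).
moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128

ucl = centre + 3 * sigma_hat  # upper control limit
lcl = centre - 3 * sigma_hat  # lower control limit

out_of_control = [x for x in samples if not (lcl <= x <= ucl)]
```

The last sample falls above the upper control limit, signalling a special cause of variation worth investigating.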
Why use the 7 QC tools?

The name "7 QC tools" refers to a fixed set of graphical techniques identified as the most helpful in troubleshooting quality-related issues. The 7 QC tools are fundamental instruments for improving process and product quality. They are used to examine the production process, identify the key issues, control fluctuations in product quality and provide solutions that avoid future defects.

These are tools that help an organization resolve its basic problems. When an organization starts the journey of quality improvement, there is normally much low-hanging fruit, which should be tackled with these basic 7 QC tools. The 7 QC tools are easy to understand and implement and do not require complex analytical or statistical competence.

When to use the 7 QC Tools?

Collectively, these tools are commonly referred to as the 7 QC tools. In the Define phase of the DMAIC process, flowcharts are very important. In the Measure phase, three of the 7 QC tools are relevant: the fishbone diagram, the Pareto chart and control charts. In the Analyze phase, the scatter diagram, histogram and check sheet are relevant. The control chart is also relevant in the Improve phase.

3 Regression Control Charts

In statistical quality control, the regression control chart allows for monitoring a change in a
process where two or more variables are correlated. A change in the dependent variable can
be detected, and a compensatory change in the independent variable can be recommended.

The regression control chart differs from a traditional control chart in four main aspects:

 It is designed to control a varying (rather than a constant) average.
 The control limit lines run parallel to the regression line rather than being horizontal.
 The computations involved are considerably more complex.
 It is appropriate for use in more complex situations.

The general idea of the regression control chart is as follows: suppose one is monitoring the
relationship between the number of parts produced and the number of work hours expended.
Within reasonable limits, one may expect that the greater the number of work hours, the more
parts are produced. One could randomly draw 5 sample work days from each month and
monitor that relationship. The control limits established in the regression control chart allow
one to detect when the relationship changes, for example when productivity drops and more
work hours become necessary to produce the same number of parts.
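This scheme can be sketched as follows: fit a regression line to historical (work hours, parts produced) data, then flag new points falling outside limits that run parallel to the line. The data and the ±3 residual-standard-deviation width are illustrative assumptions:

```python
import math

# Historical data: work hours (x) vs parts produced (y); all invented.
hist_x = [40.0, 42.0, 44.0, 46.0, 48.0, 50.0, 52.0, 54.0]
hist_y = [401.0, 419.0, 441.0, 460.0, 479.0, 501.0, 519.0, 541.0]

n = len(hist_x)
mx = sum(hist_x) / n
my = sum(hist_y) / n

# Least-squares fit y = a + b*x.
b = (sum((x - mx) * (y - my) for x, y in zip(hist_x, hist_y))
     / sum((x - mx) ** 2 for x in hist_x))
a = my - b * mx

# Residual standard deviation (n - 2 degrees of freedom).
residuals = [y - (a + b * x) for x, y in zip(hist_x, hist_y)]
s = math.sqrt(sum(r * r for r in residuals) / (n - 2))

def in_control(x, y):
    """A new point is in control if it lies within +/-3s of the line."""
    return abs(y - (a + b * x)) <= 3 * s
```

A productivity drop, for example only 460 parts from 50 work hours, then falls below the lower limit even though 460 parts would be unremarkable at a lower workload.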

4 Process Capability Analysis


Process capability analysis is a set of tools used to find out how well a given process meets
a set of specification limits. In other words, it measures how well a process performs.

In practice, it compares the distribution of sample values, representing the process outcome, to the specification limits, which define what we want achieved. Sometimes it compares them to a specification target as well.

Process capability indices are usually used to describe the capability of a process. There are
a number of different process capability indices, and whether you calculate one or all of them
may depend on your analysis needs. But calculating any process capability index assumes
the stability of your process; for unstable processes, capability indices are meaningless.
So the first step in a process capability analysis is a check for the stability of the process.

An important technique used to determine how well a process meets a set of specification
limits is called a process capability analysis. A capability analysis is based on a sample of
data taken from a process and usually produces:

1. An estimate of the DPMO (defects per million opportunities).
2. One or more capability indices.
3. An estimate of the Sigma Quality Level at which the process operates.

Capability Analysis for Measurement Data from a Normal Distribution

This procedure performs a capability analysis for data that are assumed to be a random
sample from a normal distribution. It calculates capability indices such as Cpk, estimates the
DPM (defects per million), and determines the sigma quality level (SQL) at which the
process is operating. It can handle two-sided symmetric specification limits, two-sided
asymmetric limits, and one-sided limits. Confidence limits for the most common capability
indices may also be requested.
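Under the normality assumption, the core indices can be sketched as follows, with Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ; the specification limits and data are invented:

```python
import math
import statistics

# Sample data and specification limits, all invented for illustration.
data = [10.0, 10.1, 9.9, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.0]
usl, lsl = 10.6, 9.4

mu = statistics.mean(data)
sigma = statistics.stdev(data)

cp = (usl - lsl) / (6 * sigma)               # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # actual capability

# DPM under the normal assumption: expected out-of-spec fraction x 1e6.
def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

dpm = (norm_cdf((lsl - mu) / sigma) + 1 - norm_cdf((usl - mu) / sigma)) * 1e6
```

For this perfectly centred sample, Cp and Cpk coincide; an off-centre process would show Cpk < Cp.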
Capability Analysis for Measurement Data from Non-Normal Distributions

This procedure performs a capability analysis for data that are not assumed to come from a
normal distribution. The program will fit up to 25 alternative distributions and list them
according to their goodness-of-fit. For a selected distribution, it then calculates equivalent
capability indices, DPM, and the SQL.

Capability Analysis for Correlated Measurements

When the variables that characterize a process are correlated, separately estimating the
capability of each may give a badly distorted picture of how well the process is performing.
In such cases, it is necessary to estimate the joint probability that one or more variables will
be out of spec. This requires fitting a multivariate probability distribution. This procedure
calculates capability indices, DPM, and the SQL based on a multivariate normal distribution.

Capability Analysis for Counts or Proportions

When examination of an item or event results in a PASS or FAIL rather than a measurement,
the process capability analysis must be based on a discrete distribution. For very large lots,
the relevant distribution is the binomial. For small lots, or cases with limited opportunities for
failure, the hypergeometric distribution must be used.

5 Measurement System Analysis


Measurement System Analysis (MSA) is defined as an experimental and mathematical
method of determining the amount of variation that exists within a measurement process.
Variation in the measurement process can contribute directly to overall process
variability. MSA is used to certify the measurement system for use by evaluating the
system's accuracy, precision and stability.

A measurement systems analysis (MSA) is a thorough assessment of a measurement process, and typically includes a specially designed experiment that seeks to identify the components of variation in that measurement process.

Just as processes that produce a product may vary, the process of obtaining measurements
and data may also have variation and produce incorrect results. A measurement systems
analysis evaluates the test method, measuring instruments, and the entire process of obtaining
measurements to ensure the integrity of data used for analysis (usually quality analysis) and
to understand the implications of measurement error for decisions made about a product or
process. MSA is an important element of Six Sigma methodology and of other quality
management systems.

MSA analyzes the collection of equipment, operations, procedures, software and personnel
that affects the assignment of a number to a measurement characteristic.

A measurement systems analysis considers the following:

 Selecting the correct measurement and approach
 Assessing the measuring device
 Assessing procedures and operators
 Assessing any measurement interactions
 Calculating the measurement uncertainty of individual measurement devices and/or measurement systems

Why Perform Measurement System Analysis (MSA)

An effective MSA process helps assure that the data being collected are accurate and that the
system collecting them is appropriate to the process. Good, reliable data can prevent wasted
time, labour and scrap in a manufacturing process. For example, a major manufacturing
company began receiving calls from several of its customers reporting non-compliant
materials received at their facilities. The parts were not snapping together properly to form an
even surface, or would not lock in place. The process was audited, and it was found that the
parts were being produced out of spec. The operator was following the inspection plan and
using the assigned gages for the inspection. The problem was that the gage did not have
adequate resolution to detect the non-conforming parts. An ineffective measurement system
can allow bad parts to be accepted and good parts to be rejected, resulting in dissatisfied
customers and excessive scrap. MSA could have prevented the problem and assured that
accurate, useful data were being collected.

How to Perform Measurement System Analysis (MSA)

MSA is a collection of experiments and analyses performed to evaluate a measurement
system's capability, performance and the amount of uncertainty in the values it measures.
We should review the measurement data being collected, along with the methods and tools
used to collect and record them. The goal is to quantify the effectiveness of the measurement
system, analyse the variation in the data and determine its likely sources. We need to evaluate
the quality of the data being collected with regard to both location and width variation. Data
collected should be evaluated for bias, stability and linearity.

During an MSA activity, the amount of measurement uncertainty must be evaluated for each
type of gage or measurement tool defined within the process Control Plans. Each tool should
have the correct level of discrimination and resolution to obtain useful data. The process, the
tools being used (gages, fixtures, instruments, etc.) and the operators are evaluated for proper
definition, accuracy, precision, repeatability and reproducibility.
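A minimal sketch of two of these evaluations, bias and repeatability, using repeated readings of a single reference standard; all values are invented:

```python
import statistics

# Repeated readings of one reference standard; all values invented.
reference = 25.00
readings = [25.02, 24.98, 25.01, 25.03, 24.99, 25.00, 25.02, 24.97]

bias = statistics.mean(readings) - reference  # location error
repeatability = statistics.stdev(readings)    # spread of repeated readings
```

A full gage R&R study would extend this by having several operators measure several parts repeatedly, so that repeatability (equipment variation) can be separated from reproducibility (operator variation).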

6 Design and Analysis of Experiments (DOE)



The term experiment is defined as the systematic procedure carried out under controlled
conditions in order to discover an unknown effect, to test or establish a hypothesis, or to
illustrate a known effect. When analyzing a process, experiments are often used to evaluate
which process inputs have a significant impact on the process output, and what the target
level of those inputs should be to achieve a desired result (output). Experiments can be
designed in many different ways to collect this information. Design of Experiments (DOE) is
also referred to as Designed Experiments or Experimental Design – all of the terms have the
same meaning.

Experimental design can be used at the point of greatest leverage to reduce design costs by
speeding up the design process, reducing late engineering design changes, and reducing
product material and labor complexity. Designed Experiments are also powerful tools to
achieve manufacturing cost savings by minimizing process variation and reducing rework,
scrap, and the need for inspection.

Design of Experiments (DOE)

Design of Experiments (DOE) is a branch of applied statistics focused on using the scientific
method for planning, conducting, analyzing and interpreting data from controlled tests or
experiments. DOE is a mathematical methodology used to effectively plan and conduct
scientific studies that change input variables (X) together to reveal their effect on a given
response or the output variable (Y). In plain, non-statistical language, the DOE allows you to
evaluate multiple variables or inputs to a process or design, their interactions with each other
and their impact on the output. In addition, if performed and analyzed properly you should be
able to determine which variables have the most and least impact on the output. By knowing
this you can design a product or process that meets or exceeds quality requirements and
satisfies customer needs.

Why Utilize Design of Experiments (DOE)?

DOE allows the experimenter to manipulate multiple inputs to determine their effect on the
output of the experiment or process. By performing a multi-factor or "full-factorial"
experiment, DOE can reveal critical interactions that are often missed when only a single
factor is varied at a time. By properly utilizing DOE methodology, the
number of trial builds or test runs can be greatly reduced. A robust Design of Experiments
can save project time and uncover hidden issues in the process. The hidden issues are
generally associated with the interactions of the various factors. In the end, teams will be able
to identify which factors impact the process the most and which ones have the least influence
on the process output.

When to Utilize Design of Experiments (DOE)?

Experimental design or Design of Experiments can be used during a New Product / Process
Introduction (NPI) project or during a Kaizen or process improvement exercise. DOE is
generally used in two different stages of process improvement projects.

 During the “Analyze” phase of a project, DOE can be used to help identify the Root Cause of a
problem. With DOE the team can examine the effects of the various inputs (X) on the output (Y).
DOE enables the team to determine which of the Xs impact the Y and which one(s) have the most
impact.
 During the “Improve” phase of a project, DOE can be used in the development of a predictive
equation, enabling the performance of what-if analysis. The team can then test different ideas to
assist in determining the optimum settings for the Xs to achieve the best Y output.

Some knowledge of statistical tools and experimental planning is required to fully understand
DOE methodology. While there are several software programs available for DOE analysis, to
properly apply DOE you need to possess an understanding of basic statistical concepts.
Components of Experimental Design

Consider the following diagram of a cake-baking process. There are three aspects of the
process that are analyzed by a designed experiment:

1. Factors, or inputs to the process

Factors can be classified as either controllable or uncontrollable variables. In this case, the
controllable factors are the ingredients for the cake and the oven that the cake is baked in.
The controllable variables will be referred to throughout the material as factors. Note that the
ingredients list was shortened for this example – there could be many other ingredients that
have a significant bearing on the end result (oil, water, flavoring, etc). Likewise, there could
be other types of factors, such as the mixing method or tools, the sequence of mixing, or even
the people involved. People are generally considered a noise factor, an
uncontrollable factor that causes variability under normal operating conditions, but one that
can be controlled during the experiment using blocking and randomization. Potential factors
can be categorized using a fishbone chart (cause-and-effect diagram).

2. Levels, or settings of each factor in the study

Examples include the oven temperature setting and the particular amounts of sugar, flour, and
eggs chosen for evaluation.

3. Response, or output of the experiment

In the case of cake baking, the taste, consistency, and appearance of the cake are measurable
outcomes potentially influenced by the factors and their respective levels. Experimenters
often desire to avoid optimizing the process for one response at the expense of another. For
this reason, important outcomes are measured and analyzed to determine the factors and their
settings that will provide the best overall outcome for the critical-to-quality characteristics –
both measurable variables and assessable attributes.
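Enumerating a full-factorial design for factors like these is straightforward; the factor names and level values below are invented for illustration:

```python
from itertools import product

# Two-level full-factorial design for the cake example. The factor
# names and level values are invented for illustration.
factors = {
    "oven_temp_C": [160, 180],
    "sugar_g": [200, 250],
    "flour_g": [300, 350],
}

names = list(factors)
runs = [dict(zip(names, combo)) for combo in product(*factors.values())]
# 3 factors at 2 levels each -> 2**3 = 8 experimental runs.
```

Each run is then executed and the response (taste, consistency, appearance) recorded, after which main effects and interactions can be estimated from the results.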

7 Acceptance Sampling Plan



Acceptance sampling is an inspection of a product or service that determines whether or not
the product will be accepted. For example, a furniture manufacturer would use an acceptance
sampling plan to make acceptance decisions about the type and quality of wood and similar
raw materials purchased for inclusion in its finished products.

Meaning of Acceptance Sampling or Sampling Inspection

One method of controlling the quality of a product is 100% inspection, which requires huge
expenditure of time, money and labour. Moreover, due to the boredom and fatigue involved
in repetitive inspection, there is a possibility of oversight, and some defective products may
pass the inspection point.

Also, when the quality of a product is tested by destructive testing (e.g., the life of a candle or
the testing of electrical fuses), 100% inspection would destroy all the products.

The alternative is statistical sampling inspection methods. Here from the whole lot of
products/items to be inspected, some items are selected for inspection.

If that sample of items conforms to the desired quality requirements, the whole lot is
accepted; if it does not, the whole lot is rejected. The sample items are thus considered
representative of the whole lot. This method of accepting or rejecting a lot on the basis of a
sample is called acceptance sampling.

In general, acceptance sampling proves economical and is used under the assumption that
the quality characteristics of the items are under control and relatively homogeneous.

Classification of Acceptance Sampling Plan

Depending upon the type of inspection acceptance sampling may be classified in two ways:

(i) Acceptance sampling on the basis of attributes i.e. GO and NOT GO gauges, and

(ii) Acceptance sampling on the basis of variables.

In acceptance sampling by attributes, no actual measurement is made; the inspection is done
by way of GO and NOT GO gauges. If the product conforms to the given specifications it is
accepted, otherwise it is rejected. The magnitude of the error is not important in this case.

For example, if cracks are the criterion of inspection, products with cracks will be rejected
and those without accepted; the shape and size of the cracks are not measured or considered.
In acceptance sampling by variables, the actual measurements of dimensions are taken or
physical and chemical testing of the characteristics of sample of materials/products is done. If
the results are as per specifications the lot is accepted otherwise rejected.

Advantages of Acceptance Sampling Plan

(i) The method is applicable in those industries where there is mass production and the
industries follow a set production procedure.

(ii) The method is economical and easy to understand.

(iii) It causes less fatigue and boredom.

(iv) Computation work involved is comparatively very small.

(v) The people involved in inspection can easily be trained.

(vi) Products whose inspection is destructive can practically be inspected only by sampling.

(vii) Due to quick inspection process, scheduling and delivery times are improved.

Limitations of Acceptance Sampling Plan

(i) It does not give 100% assurance of conformance to specifications, so there is always
some risk of drawing a wrong inference about the quality of the batch or lot.

(ii) Success of the system depends on sampling randomness, the quality characteristics to be
tested, the batch size and the criteria for acceptance of the lot.
Terms Used in Acceptance Sampling

Following terms are generally used in acceptance sampling:

1. Acceptable Quality Level (AQL)

It is the desired quality level, at which the probability of acceptance is high. It represents the
maximum proportion of defectives that the consumer finds acceptable, i.e. the maximum
percent defective that, for the purposes of sampling inspection, can be considered satisfactory.

2. Lot Tolerance Percent Defective (LTPD) or Rejectable Quality Level (RQL)

It is the quality level at which the probability of acceptance is low and below this level the
lots are rejected. This prescribes the dividing line between good and bad lots. Lots at this
quality level are considered to be poor.

3. Average outgoing Quality (A.O.Q)

Acceptance sampling plans provide assurance that the average quality level, or percent
defective, actually going to consumers will not exceed a certain limit. The figure demonstrates
the concept of average outgoing quality in relation to the actual percent defective being
produced. The AOQ curve indicates that, as the actual percent defective in a production
process increases, the initial effect is for lots to be passed for acceptance even though the
number of defectives has gone up, so the percent defective going to the consumer increases.

If this upward trend continues, the acceptance plan begins to reject lots; when lots are
rejected, 100% inspection follows and defective units are replaced by good ones. The net
effect is to improve the average quality of the outgoing products, since the rejected lots that
are ultimately accepted contain only non-defective items (because of the 100% inspection).

4. Operating Characteristic Curve or O.C. Curve

The operating characteristic curve for a sampling plan is a graph of the fraction defective in a
lot against the probability of acceptance. In practice, the performance of acceptance sampling
in distinguishing defective from acceptable items, or good lots from bad, depends mainly on
the sample size (n) and the number of defectives permitted in the sample (c).
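For a single-sampling plan, the probability of accepting a lot at a given fraction defective follows the binomial distribution, giving one point on the O.C. curve; the plan (n = 50, c = 2) is illustrative:

```python
from math import comb

def prob_accept(p, n=50, c=2):
    """P(at most c defectives in a random sample of n) for a lot with
    fraction defective p -- one point on the O.C. curve."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))
```

Sweeping p from 0 to 1 traces the whole curve: lots with a low fraction defective (near the AQL) are accepted with high probability, while poor lots (near the LTPD) are almost always rejected.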

The O.C. curve of a 100 percent inspection plan, shown in the figure, is said to be an ideal
curve, because it is generated by an acceptance plan that creates no risk for either the
producer or the consumer. The figure also shows the O.C. curve passing through two
stipulated points, the AQL and the LTPD, pre-agreed by the producer and the consumer.

Usually the producer’s and consumer’s risks are agreed upon and explicitly recorded in
quantitative terms.

This leads to the following two types of risk:

(i) Producer’s risk: the probability that a lot of acceptable quality (at the AQL) is nevertheless rejected by the plan.

(ii) Consumer’s risk: the probability that a lot of poor quality (at the LTPD) is nevertheless accepted by the plan.

The merit of any sampling plan depends on the relationship of sampling cost to risk. As the
cost of inspection goes down, the cost of accepting defectives increases.

Characteristics of O.C. Curve

(i) The larger the sample size and acceptance number, the steeper the slope of the O.C. curve.

(ii) O.C. curves of sampling plans with an acceptance number greater than zero are
superior to those with an acceptance number of zero.

(iii) A fixed sample size tends towards constant quality protection.

8 Process Failure Mode and Effects Analysis (PFMEA)



Introduction to Process Failure Mode and Effects Analysis (PFMEA)

Manufacturing and process engineers envision a process that is free of errors. Unfortunately,
errors occur, and errors propagated where people are present can be especially catastrophic.
Process Failure Mode and Effects Analysis (PFMEA) looks at each process step to identify
risks and possible errors from many different sources. The sources most often considered are:

 Man
 Methods
 Material
 Machinery
 Measurement
 Mother Earth (Environment)

What is Process Failure Mode and Effects Analysis (PFMEA)?


PFMEA is a methodical approach used for identifying risks in process changes. The Process
FMEA first identifies process functions and failure modes, and their effects on the process. If
there are design inputs or special characteristics, the effect on the end user is also included.
A severity ranking, reflecting the danger of the effect, is determined for each effect of failure.
Then the causes of the failure mode and their mechanisms are identified. The assumption
that the design is adequate keeps the focus on the process. A high probability of a cause
drives actions to prevent or reduce its impact on the failure mode. The detection ranking
determines the ability of specific tests to confirm that the failure mode and its causes have
been eliminated. The PFMEA also tracks improvements through reductions in the Risk
Priority Number (RPN). By comparing the before and after RPNs, a history of improvement
and risk mitigation can be chronicled.
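The RPN bookkeeping can be sketched as follows; the failure modes and rankings are invented examples:

```python
# Each failure mode carries Severity (S), Occurrence (O) and Detection
# (D) rankings on a 1-10 scale; the entries are invented examples.
failure_modes = [
    {"mode": "wrong torque", "S": 8, "O": 4, "D": 3},
    {"mode": "missing clip", "S": 6, "O": 2, "D": 2},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]  # Risk Priority Number

# Address the highest-risk failure modes first.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
```

After a mitigation action is completed, O and D are re-ranked and the RPN recalculated, so the before/after comparison documents the risk reduction.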

Why Perform Process Failure Mode and Effects Analysis (PFMEA)?

For a new process, risk is the stand-in for failure history. It is good practice to identify the
risks of each process step as early as possible, the main goal being to identify risk prior to
tooling acquisition. Mitigating the identified risks before first article or Production Part
Approval Process (PPAP) validates the expectation of superior process performance.

Risks are identified in new technologies and processes which, if left unattended, could result
in failure. The PFMEA is applied when:

 A new technology or new process is introduced.
 A current process is modified, including changes due to updated processes, continuous improvement, Kaizen or Cost of Quality (COQ).
 A current process is exposed to a new environment or a change in location (with no physical change made to the process).

How to Perform Process Failure Mode and Effects Analysis (PFMEA)?

There are five primary sections of the Process FMEA. Each section has a distinct purpose and
a different focus. The PFMEA is completed in sections at different times within the project
timeline, not all at once. The Process FMEA form is completed in the following sequence:
PFMEA Section 1 (Quality-One Path 1)

Process Name / Function

The Process Name / Function column permits the Process Engineer (PE) or Manufacturing
Engineer (ME) to describe the process technology being analyzed. The process can be a
manufacturing operation or an assembly. The function is the “verb-noun” pair that describes
what the process operation does. There may be many functions for any one process operation.

Requirement

The requirements, or measurements, of the process function are described in the second
column. The requirements are either provided by a drawing or a list of special characteristics.
A Characteristics Matrix, which is a form of Quality Function Deployment (QFD), may be
used and will link characteristics to their process operations. The requirement must be
measurable and should have test and inspection methods defined. These methods will later
be placed on the Control Plan. The first opportunity for recommended action may be to
investigate and clarify the requirements and characteristics of the product with the design
team and Design FMEA.

Failure Mode

Failure modes are the anti-functions: requirements not being met. There are five types of
failure mode:

(i) Full Failure

(ii) Partial Failure

(iii) Intermittent Failure

(iv) Degraded Failure

(v) Unintentional Failure

Effects of Failure

The effects of a failure are focused on impacts to the processes, subsequent operations and
possibly customer impact. Many effects could be possible for any one failure mode. All
effects should appear in the same cell next to the corresponding failure mode. It is also
important to note that there may be more than one customer; both internal and external
customers may be affected.

Severity

The Severity of each effect is selected based on both Process Effects and Design
Effects. The severity ranking is typically on a scale of 1 to 10.

Typical Severity for Process Effects (when no Special Characteristics / design inputs are
given) is as follows:

 2-4: Minor Disruption with rework / adjustment in stations; slows down production (does not
describe a lean operation)
 5-6: Minor disruption with rework out of station; additional operations required (does not describe a
lean operation)
 7-8: Major disruption, rework and/or scrap is produced; may shut down lines at the customer or
internally within the organization
 9-10: Regulatory and safety of the station is a concern; machine / tool damage or unsafe work
conditions

Typical Severity for Design Effects (when Special Characteristics / design inputs are given)
is as follows:

 2-4: Annoyance or squeak and rattle; visual defects which do not affect function
 5-6: Degradation or loss of a secondary function of the item studied
 7-8: Degradation or loss of the primary function of the item studied
 9-10: Regulatory and / or Safety implications

The highest severity is chosen from the many potential effects and placed in the Severity
Column. Actions may be identified that can change the design direction on any failure mode
with an effect of failure ranked 9 or 10. If a recommended action is identified, it is placed in
the Recommended Actions column of the PFMEA.

Classification

Classification refers to the type of characteristics indicated by the risk. Many types of special
characteristics exist in different industries. These special characteristics typically require
additional work, either design error proofing, process error proofing, process variation
reduction (Cpk) or mistake proofing. The Classification column designates where the
characteristics may be identified and later transferred to a Control Plan.
PFMEA Section 2 (Quality-One Path 2)

Potential Causes / Mechanisms of Failure

Causes are defined for the Failure Mode and should be determined for their impact on the
Failure Mode being analyzed. Causes typically follow the Fishbone / Ishikawa Diagram
approach, with the focus of cause brainstorming on the 6M’s: Man, Method, Material,
Machine, Measurement and Mother Earth (Environment). Use of words like bad, poor,
defective and failed should be avoided as they do not define the cause with enough detail to
make risk calculations for mitigation.

Current Process Controls Prevention

The prevention strategy used by a manufacturing or process team may benefit the process by
lowering occurrence or probability. The stronger the prevention, the more evidence that the
potential cause can be eliminated by process design. Verified process standards, proven
technology (with similar stresses applied), Programmable Logic Controllers (PLC),
simulation technology and Standard Work are typical Prevention Controls.

Occurrence

The Occurrence ranking is an estimate based on known data or lack of it. The Occurrence in
Process FMEAs can be related to known / similar technology or new process technology. A
modification to the ranking table is suggested based on volumes and specific use.
Typical Occurrence rankings for new process technology (similar to DFMEA Occurrence
Ranking) are as follows:

 1: Prevented causes due to using a known design standard
 2: Identical or similar design with no history of failure
 This ranking is often used improperly. The stresses in the new application and a sufficient sample of
products to gain history are required to select this ranking value.
 3-4: Isolated failures
 Some confusion may occur when trying to quantify “isolated”
 5-6: Occasional failures have been experienced in the field or in development / verification testing
 7-9: New design with no history (based on a current technology)
 10: New design with no experience with technology

Typical Occurrence rankings for known / similar technology are as follows:

 1: Prevented through product / process design; error proofed
 2: 1 in 1,000,000
 3: 1 in 100,000
 4: 1 in 10,000
 5: 1 in 2,000
 6: 1 in 500
 7: 1 in 100
 8: 1 in 50
 9: 1 in 20
 10: 1 in 10
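
The known / similar technology table above amounts to a lookup from observed failure rate to ranking. A minimal sketch in Python (the function name is illustrative, and the parts-per-million thresholds are simply the table's ratios restated; rank 1 is reserved for error-proofed processes):

```python
def occurrence_rank(failures_per_million):
    """Map an observed failure rate to the 2-10 Occurrence ranking
    for known / similar technology, per the table above.
    Rank 1 is reserved for error-proofed product / process designs."""
    thresholds = [  # (max failures per million opportunities, rank)
        (1, 2),        # 1 in 1,000,000
        (10, 3),       # 1 in 100,000
        (100, 4),      # 1 in 10,000
        (500, 5),      # 1 in 2,000
        (2000, 6),     # 1 in 500
        (10000, 7),    # 1 in 100
        (20000, 8),    # 1 in 50
        (50000, 9),    # 1 in 20
    ]
    for limit, rank in thresholds:
        if failures_per_million <= limit:
            return rank
    return 10          # 1 in 10 or worse

print(occurrence_rank(100))   # 1 in 10,000 -> 4
print(occurrence_rank(5000))  # worse than 1 in 500 -> 7
```
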

Actions may be directed against causes of failure which have a high occurrence. Special
attention must be placed on items with Severity 9 or 10. These severity rankings must be
examined to assure that due diligence has been satisfied.
PFMEA Section 3 (Quality-One Path 3)

Current Process Controls Detection

The activities conducted to verify the product meets the specifications detailed by the product
or process design are placed in the Current Process Controls Detection column. Examples
are:

 Error proofing devices (cannot make nonconforming product)
 Mistake proofing devices (cannot pass nonconforming product)
 Inspection devices which collect variable data
 Alarms for unstable process parameters
 Visual inspection

Detection Rankings

Detection Rankings are assigned to each method or inspection based on the type of technique
used. Each detection control is given a detection ranking using a predetermined scale. There
is often more than one test / evaluation technique per Cause-Failure Mode combination.
Listing all in one cell and applying a detection ranking for each is the best practice. The
lowest of the detection rankings is then placed in the detection column. Typical Process
Controls Detection Rankings are as follows:
 1: Error (Cause) has been fully prevented and cannot occur
 2: Error Detection in-station, will not allow a nonconforming product to be made
 3: Failure Detection in-station, will not allow nonconforming product to pass
 4: Failure Detection out of station, will not leave plant / pass through to customer
 5-6: Variables gage, attribute gages, control charts, etc., requires operator to complete the activity
 7-8: Visual, tactile or audible inspection
 9: Lot sample by inspection personnel
 10: No Controls
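
Since the best practice above is to rank every control for a Cause-Failure Mode combination and carry the lowest ranking into the Detection column, the selection step is simply a minimum over the per-control rankings. A small illustrative sketch (the control names and their rankings are example values taken from the scale above):

```python
# Detection rankings for one Cause-Failure Mode combination
# (names and values are illustrative, per the scale above).
controls = {
    "in-station failure detection": 3,
    "visual inspection": 7,
    "lot sample by inspection personnel": 9,
}

# The lowest (strongest) ranking is placed in the Detection column.
detection = min(controls.values())
print(detection)  # -> 3
```
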

Actions may be necessary to improve inspection or evaluation capability. The improvement
will address the weakness in the inspection and evaluation strategy. The actions are placed in
the Recommended Actions column.
PFMEA Section 4

Risk Priority Number (RPN)

The Risk Priority Number (RPN) is the product of the three previously selected rankings,
Severity * Occurrence * Detection. RPN thresholds must not be used to determine the need
for action. RPN thresholds are not permitted mainly due to two factors:

 Poor behavior by design engineers trying to get below the specified threshold
 This behavior does not improve or address risk. There is no RPN value above which an action should
be taken or below which a team is excused of one.
 “Relative Risk” is not always represented by RPN
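
The RPN arithmetic itself is a single multiplication. A minimal sketch (the function name is illustrative; note the caveat above that RPN thresholds should not by themselves drive action):

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number = Severity * Occurrence * Detection,
    each ranked on a 1-10 scale."""
    for rank in (severity, occurrence, detection):
        if not 1 <= rank <= 10:
            raise ValueError("each ranking must be between 1 and 10")
    return severity * occurrence * detection

original = rpn(8, 5, 6)   # -> 240
# After actions are taken the line is re-ranked; a reduction is desirable:
revised = rpn(8, 2, 3)    # -> 48
print(original, revised)
```

Severity rarely changes after actions (the effect is the same); the reduction usually comes from Occurrence and Detection, as in the example.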

Recommended Actions

The Recommended Actions column is the location within the Process FMEA where all
potential improvements are placed. Completed actions are the purpose of the PFMEA.
Each action must be detailed enough to make sense if it stood alone in a risk register or
actions list. Actions are directed against one of the rankings previously assigned. The
objectives are as follows:

 Eliminate Failure Modes with a Severity 9 or 10
 Lower Occurrence on Causes by error proofing, reducing variation or mistake proofing
 Lower Detection rankings through specific test improvements

Responsibility and Target Completion Date

Enter the name of the person responsible and the date by which the action should be
completed. A milestone name can substitute for a date if a timeline shows the linkage
between the date and the selected milestone.
PFMEA Section 5

Actions Taken and Completion Date

List the Actions Taken or reference the test report which indicates the results. The Process
FMEA should result in actions which bring higher-risk items to an acceptable level of risk. It
is important to note that acceptable risk is desirable and that mitigation of high risk to lower
risk is the primary goal.

Re-Rank RPN
The new (re-ranked) RPN should be compared with the original RPN. A reduction in this
value is desirable. Residual risk may still be too high after actions have been taken. If this is
the case, a new action line would be developed. This is repeated until an acceptable residual
risk has been obtained.


6 SERVQUAL Model of Measuring Service Quality



The SERVQUAL Model is an empirical model by Zeithaml, Parasuraman and Berry to
compare service quality performance with customer service quality needs. It is used to do a
gap analysis of an organization’s service quality performance against the service quality
needs of its customers. That’s why it’s also called the GAP model.

It takes into account the perceptions of customers of the relative importance of service
attributes. This allows an organization to prioritize.

There are five core components of service quality:


1. Tangibles: physical facilities, equipment, staff appearance, etc.
2. Reliability: ability to perform service dependably and accurately.
3. Responsiveness: willingness to help and respond to customer need.
4. Assurance: ability of staff to inspire confidence and trust.
5. Empathy: the extent to which caring individualized service is given.

The four provider gaps identified by the SERVQUAL developers were numbered and
labelled as:
1. Consumer expectation – Management Perception Gap (Gap 1):

Management may have inaccurate perceptions of what consumers (actually) expect. The
reason for this gap is lack of proper market/customer focus. The presence of a marketing
department does not automatically guarantee market focus. It requires the appropriate
management processes, market analysis tools and attitude.
2. Service Quality Specification Gap (Gap 2):

There may be an inability on the part of the management to translate customer expectations
into service quality specifications. This gap relates to aspects of service design.
3. Service Delivery Gap (Gap 3):

Guidelines for service delivery do not guarantee high-quality service delivery or
performance. There are several reasons for this, including lack of sufficient support for
frontline staff, process problems, and variability in the performance of frontline / contact
staff.
4. External Communication Gap (Gap 4):

Consumer expectations are fashioned by the external communications of an organization. A
realistic expectation will normally promote a more positive perception of service quality. A
service organization must ensure that its marketing and promotion material accurately
describes the service offering and the way it is delivered.

5. These four gaps cause a fifth gap (Gap 5):

This is the difference between customer expectations and perceptions of the service actually
received. Perceived quality of service depends on the size and direction of Gap 5, which in
turn depends on the nature of the gaps associated with the marketing, design and delivery of
services. So, Gap 5 is the result of gaps 1, 2, 3 and 4. If these four gaps, all of which are
located below the line that separates the customer from the company, are closed, then Gap 5
will close.
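
Gap 5 can be estimated from paired expectation and perception survey scores for the five dimensions listed earlier. A hedged sketch in Python (all scores here are invented for illustration; a real SERVQUAL survey uses 22 paired items, typically on a 7-point scale):

```python
# Average expectation (E) and perception (P) survey scores per
# dimension; the numbers are purely illustrative.
expectation = {"Tangibles": 4.2, "Reliability": 6.5, "Responsiveness": 6.0,
               "Assurance": 5.8, "Empathy": 5.5}
perception  = {"Tangibles": 4.5, "Reliability": 5.9, "Responsiveness": 5.2,
               "Assurance": 5.9, "Empathy": 5.0}

# Gap 5 per dimension = P - E; negative means the service fell short
# of what customers expected on that dimension.
gaps = {d: round(perception[d] - expectation[d], 2) for d in expectation}
for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{dim:15s} {gap:+.2f}")
# Responsiveness (-0.80) would be the dimension to prioritize here.
```
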
How to measure Service Quality?

1. Mystery Shopping

This is a popular technique used for retail stores, hotels, and restaurants, but it works for any
other service as well. It consists of hiring an ‘undercover customer’ to test your service
quality – or putting on a fake moustache and going yourself, of course.

The undercover agent then assesses the service based on a number of criteria, for example
those provided by SERVQUAL. This offers more insights than simply observing how your
employees work — which will probably be outstanding, as long as their boss is around.

2. Post Service Rating

This is the practice of asking customers to rate the service right after it’s been delivered.

With Userlike’s live chat, for example, you can set the chat window to change into a service
rating view once it closes. The customers make their rating, perhaps share some explanatory
feedback, and close the chat.

Something similar is done with ticket systems like Help Scout, where you can rate the service
response from your email inbox.

It’s also done in phone support. The service rep asks whether you’re satisfied with her service
delivery, or you’re asked to stay on the line to complete an automatic survey. The latter
version is so annoying, though, that it kind of destroys the entire service experience.

Different scales can be used for the post service rating. Many make use of a number-rating
from 1 – 10. There’s possible ambiguity here, though, because cultures differ in how they rate
their experiences.

3. Follow-Up Survey

With this method you ask your customers to rate your service quality through an email survey
– for example via Google Forms. It has a couple of advantages over the post-service rating.

For one, it gives your customer the time and space for more detailed responses. You can send
a SERVQUAL type of survey, with multiple questions instead of one. That’d be terribly
annoying in a post-service rating.

It also provides a more holistic overview of your service. Instead of a case-by-case
assessment, the follow-up survey measures your customers’ overall opinion of your service.

It’s also a useful technique if you didn’t have the post service rating in place yet and want a
quick overview of the state of your service quality.

4. In-App Survey

With an in-app survey, the questions are asked while the visitor is on the website or in the
app, instead of after the service or via email. It can be one simple question – e.g. ‘how would
you rate our service’ – or it could be a couple of questions.

Convenience and relevance are the main advantages. SurveyMonkey offers some great tools
for implementing something like this on your website.

5. Customer Effort Score (CES)

This metric was proposed in an influential Harvard Business Review article. In it, they argue
that while many companies aim to ‘delight’ the customer – to exceed service expectations –
it’s more likely for a customer to punish companies for bad service than it is for them to
reward companies for good service.

While the costs of exceeding service expectations are high, they show that the payoffs are
marginal. Instead of delighting our customers, so the authors argue, we should make it as
easy as possible for them to have their problems solved.

That’s what they found had the biggest positive impact on the customer experience, and what
they propose measuring.

6. Social Media Monitoring

This method has been gaining momentum with the rise of social media. For many people,
social media serve as an outlet. A place where they can unleash their frustrations and be
heard.

And because of that, they are the perfect place to hear the unfiltered opinions of your
customers – if you have the right tools. Facebook and Twitter are obvious choices, but also
review platforms like TripAdvisor or Yelp can be very relevant. Buffer suggests asking your
social media followers for feedback on your service quality.

Two great tools to track who’s talking about you are Mention and Google Alerts.

7. Documentation Analysis

With this qualitative approach you read through your written service records or listen to the
recorded ones. You’ll definitely want to go through the documentation of low-rated service
deliveries, but it can also be interesting to read through the documentation of service agents
that always rank high. What are they doing better than the rest?

The hurdle with the method isn’t in the analysis, but in the documentation. For live chat and
email support it’s rather easy, but for phone support it requires an annoying voice at the start
of the call: “This call could be recorded for quality measurement”.

8. Objective Service Metrics

These stats deliver the objective, quantitative analysis of your service. These metrics aren’t
enough to judge the quality of your service by themselves, but they play a crucial role in
showing you the areas you should improve in.

 Volume per channel. This tracks the amount of inquiries per channel. When combined with other
metrics, like those covering efficiency or customer satisfaction, it allows you to decide which
channels to promote or cut down.
 First response time. This metric tracks how quickly a customer receives a response on her inquiry.
This doesn’t mean their issue is solved, but it’s the first sign of life – notifying them that they’ve been
heard.
 Response time. This is the total average of time between responses. So let’s say your email ticket
was resolved with 4 responses, with respective response times of 10, 20, 5, and 7 minutes. Your
response time is 10.5 minutes. Concerning reply times, most people reaching out via email expect a
response within 24 hours; for social channels it’s 60 minutes. Phone and live chat require an
immediate response, under 2 minutes.
 First contact resolution ratio. Divide the number of issues that are resolved through a single
response by the number that required more responses. Forrester research showed that first contact
resolutions are an important customer satisfaction factor for 73% of customers.
 Replies per ticket. This shows how many replies your service team needs on average to close a
ticket. It’s a measure of efficiency and customer effort.
 Backlog Inflow/Outflow. This is the number of cases submitted compared to the number of cases
closed. A growing number indicates that you’ll have to expand your service team.
 Customer Success Ratio. A good service doesn’t mean your customers always find what they want.
But keeping track of the number that found what they looked for versus those that didn’t can show
whether your customers have the right ideas about your offerings.
 ‘Handovers’ per issue. This tracks how many different service reps are involved per issue.
Customers hate handovers, especially in phone support where the issue must be repeated each time;
HBR identified it as one of the four most common service complaints.
 Things Gone Wrong. The number of complaints/failures per customer inquiry. It helps you identify
products, departments, or service agents that need some ‘fixing’.
 Instant Service / Queuing Ratio. Nobody likes to wait. Instant service is the best service. This metric
keeps track of the ratio of customers that were served instantly versus those that had to wait. The
higher the ratio, the better your service.
 Average Queueing Waiting Time. The average time that queued customers have to wait to be
served.
 Queueing Hang-ups. How many customers quit the queueing process. These count as a lost service
opportunity.
 Problem Resolution Time. The average time before an issue is resolved.
 Minutes Spent Per Call. This can give you insight on who are your most efficient operators.
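
A few of these metrics can be made concrete with the worked reply-time example above (the ticket counts used for the first contact resolution ratio are invented):

```python
# Reply times (minutes) for the email ticket in the example above.
reply_times = [10, 20, 5, 7]
response_time = sum(reply_times) / len(reply_times)
print(response_time)  # -> 10.5 minutes

# First contact resolution ratio, per the definition above: tickets
# resolved with a single response divided by tickets that required
# more than one (counts are illustrative).
single_response = 60
multi_response = 40
fcr_ratio = single_response / multi_response
print(fcr_ratio)  # -> 1.5
```
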

Some of these measures are also financial metrics, such as the minutes spent per call and
number of handovers. You can use them to calculate your service costs per service contact.
Winning the award for the world’s best service won’t get you anywhere if the costs eat up
your profits.

Some service tools keep track of these sorts of metrics automatically, like Talkdesk for phone
and Userlike for live chat support. If you make use of communication tools that aren’t
dedicated to service, tracking them will be a bit more work.

One word of caution for all of the above-mentioned methods and metrics: beware of
averages; they will deceive you. If your dentist delivers a great service 90% of the time, but
has a habit of binge drinking and pulling out the wrong teeth the rest of the time, you won’t
stick around long.

A more realistic image shapes up if you keep track of the outliers and standard deviation as
well. Measure your service, aim for a high average, and improve by diminishing the outliers.
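
The dentist example can be made concrete with the standard library: two providers with the same average rating but very different spread (all ratings are invented for illustration):

```python
import statistics

# Ten ratings on a 0-10 scale; both providers average 8.0, but one
# has severe outliers (the "wrong teeth" visits).
steady  = [8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
erratic = [10, 10, 10, 10, 10, 10, 10, 10, 0, 0]

for name, ratings in (("steady", steady), ("erratic", erratic)):
    print(name, statistics.mean(ratings),
          round(statistics.stdev(ratings), 2))
# Same mean; only the standard deviation exposes the outliers.
```
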

UNIT 3

1 Quality Function Deployment



The average consumer today has a multitude of options available to select from for similar
products and services. Most consumers make their selection based upon a general perception
of quality or value. Consumers typically want “the most bang for their buck”. In order to
remain competitive, organizations must determine what is driving the consumer’s perception
of value or quality in a product or service. They must define which characteristics of the
products such as reliability, styling or performance form the customer’s perception of quality
and value. Many successful organizations gather and integrate the Voice of the Customer
(VOC) into the design and manufacture of their products. They actively design quality and
customer perceived value into their products and services. These companies are utilizing a
structured process to define their customer’s wants and needs and transforming them into
specific product designs and process plans to produce products that satisfy the customer’s
needs. The process or tool they are using is called Quality Function Deployment (QFD).
Quality Function Deployment (QFD)

Quality Function Deployment (QFD) is a process and set of tools used to effectively define
customer requirements and convert them into detailed engineering specifications and plans to
produce the products that fulfill those requirements. QFD is used to translate customer
requirements (or VOC) into measurable design targets and drive them from the assembly
level down through the sub-assembly, component and production process levels. QFD
methodology provides a defined set of matrices utilized to facilitate this progression.

QFD was first developed in Japan by Yoji Akao in the late 1960s while working for
Mitsubishi’s shipyard. It was later adopted by other companies including Toyota and its
supply chain. In the early 1980s, QFD was introduced in the United States mainly by the big
three automotive companies and a few electronics manufacturers. Acceptance and growth of
the use of QFD in the US was initially rather slow but has since gained popularity and is
currently being used in manufacturing, healthcare and service organizations.
Why Implement Quality Function Deployment (QFD)
Effective communication is one of the most important and impactful aspects of any
organization’s success. QFD methodology effectively communicates customer needs to
multiple business operations throughout the organization including design, quality,
manufacturing, production, marketing and sales. This effective communication of the Voice
of the Customer allows the entire organization to work together and produce products with
high levels of customer perceived value. There are several additional benefits to using
Quality Function Deployment:

 Customer Focused: QFD methodology places the emphasis on the wants and needs of the customer,
not on what the company may believe the customer wants. The Voice of the Customer is translated
into technical design specifications. During the QFD process, design specifications are driven down
from machine level to system, sub-system and component level requirements. Finally, the design
specifications are controlled throughout the production and assembly processes to assure the
customer needs are met.
 VOC Competitor Analysis: The QFD “House of Quality” tool allows for direct comparison of how your
design or product stacks up to the competition in meeting the VOC. This quick analysis can be
beneficial in making design decisions that could place you ahead of the pack.
 Shorter Development Time and Lower Cost: QFD reduces the likelihood of late design changes by
focusing on product features and improvements based on customer requirements. Effective QFD
methodology prevents valuable project time and resources from being wasted on development of
non-value added features or functions.
 Structure and Documentation: QFD provides a structured method and tools for recording decisions
made and lessons learned during the product development process. This knowledge base can serve
as a historical record that can be utilized to aid future projects.

Companies must bring new and improved products to market that meet the customer’s actual
wants and needs while reducing development time. QFD methodology is for organizations
committed to listening to the Voice of the Customer and meeting their needs.
How to Implement Quality Function Deployment (QFD)
The Quality Function Deployment methodology is a 4-phase process that encompasses
activities throughout the product development cycle. A series of matrices are utilized at each
phase to translate the Voice of the Customer to design requirements for each system, sub-
system and component. The four phases of QFD are:

1. Product Definition: The Product Definition Phase begins with collection of VOC and translating the
customer wants and needs into product specifications. It may also involve a competitive analysis to
evaluate how effectively the competitor’s product fulfills the customer wants and needs. The initial
design concept is based on the particular product performance requirements and specifications.
2. Product Development: During the Product Development Phase, the critical parts and assemblies are
identified. The critical product characteristics are cascaded down and translated to critical or key
part and assembly characteristics or specifications. The functional requirements or specifications are
then defined for each functional level.
3. Process Development: During the Process Development Phase, the manufacturing and assembly
processes are designed based on product and component specifications. The process flow is
developed and the critical process characteristics are identified.
4. Process Quality Control: Prior to production launch, the QFD process identifies critical part and
process characteristics. Process parameters are determined and appropriate process controls are
developed and implemented. In addition, any inspection and test specifications are developed. Full
production begins upon completion of process capability studies during the pilot build.

Effective use of QFD requires team participation and discipline inherent in the practice of
QFD, which has proven to be an excellent team-building experience.
Level 1 QFD

The House of Quality is an effective tool used to translate the customer wants and needs into
product or service design characteristics utilizing a relationship matrix. It is usually the first
matrix used in the QFD process. The House of Quality demonstrates the relationship between
the customer wants or “Whats” and the design parameters or “Hows”. The matrix is data
intensive and allows the team to capture a large amount of information in one place. The
matrix earned the name “House of Quality” due to its structure resembling that of a house. A
cross-functional team possessing thorough knowledge of the product, the Voice of the
Customer and the company’s capabilities, should complete the matrix. The different sections
of the matrix and a brief description of each are listed below:

 “Whats”: This is usually the first section to be completed. This column is where the VOC, or the
wants and needs, of the customer are listed.
 Importance Factor: The team should rate each of the functions based on their level of importance to
the customer. In many cases, a scale of 1 to 5 is used with 5 representing the highest level of
importance.
 “Hows” or Ceiling: Contains the design features and technical requirements the product will need to
align with the VOC.
 Body or Main Room: Within the main body or room of the house of quality the “Hows” are ranked
according to their correlation or effectiveness of fulfilling each of the “Whats”. The ranking system
used is a set of symbols indicating either a strong, moderate or a weak correlation. A blank box
would represent no correlation or influence on meeting the “What”, or customer requirement. Each
of the symbols represents a numerical value of 0, 1, 3 or 9.
 Roof: This matrix is used to indicate how the design requirements interact with each other. The
interrelationships are ratings that range from a strong positive interaction (++) to a strong negative
interaction (–) with a blank box indicating no interrelationship.
 Competitor Comparison: This section visualizes a comparison of the competitor’s product in regards
to fulfilling the “Whats”. In many cases, a scale of 1 to 5 is used for the ranking, with 5 representing
the highest level of customer satisfaction. This section should be completed using direct feedback
from customer surveys or other means of data collection.
 Relative Importance: This section contains the results of calculating the total of the sums of each
column when multiplied by the importance factor. The numerical values are represented as discrete
numbers or percentages of the total. The data is useful for ranking each of the “Hows” and
determining where to allocate the most resources.
 Lower Level / Foundation: This section lists more specific target values for technical specifications
relating to the “Hows” used to satisfy VOC.
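
The Relative Importance calculation described above is a weighted column sum: each “How” column’s correlation values (0, 1, 3 or 9) are multiplied by the importance factor (1 to 5) of the corresponding “What” and totalled. A sketch with an invented pen-design example (the product, “Whats” and “Hows” are all illustrative):

```python
# Each "What" carries an importance factor (1-5) and its row of
# correlation values (0, 1, 3, 9) against the "Hows".
whats = {
    "easy to hold":    (5, {"grip diameter": 9, "weight": 3, "ink flow": 0}),
    "writes smoothly": (4, {"grip diameter": 0, "weight": 1, "ink flow": 9}),
    "lasts long":      (3, {"grip diameter": 0, "weight": 0, "ink flow": 3}),
}
hows = ["grip diameter", "weight", "ink flow"]

# Relative importance of each "How" = sum over rows of
# (importance factor * correlation value).
relative_importance = {
    how: sum(importance * correlations[how]
             for importance, correlations in whats.values())
    for how in hows
}
print(relative_importance)
# -> {'grip diameter': 45, 'weight': 19, 'ink flow': 45}
```

The highest-scoring “Hows” are where the design team would allocate the most resources, as the Relative Importance description above suggests.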

Upon completion of the House of Quality, the technical requirements derived from the VOC
can then be deployed to the appropriate teams within the organization and populated into the
Level 2 QFDs for more detailed analysis. This is the first step in driving the VOC throughout
the product or process design process.

Level 2 QFD

The Level 2 QFD matrix is used during the Design Development Phase. Using the Level 2
QFD, the team can discover which of the assemblies, systems, sub-systems and components
have the most impact on meeting the product design requirements and identify key design
characteristics. The information produced from performing a Level 2 QFD is often used as a
direct input to the Design Failure Mode and Effects Analysis (DFMEA) process. Level 2
QFDs may be developed at the following levels:
 System Level: The technical specifications and functional requirements or “Hows” identified and
prioritized within The House of Quality become the “Whats” for the system level QFD. They are then
evaluated according to which of the systems or assemblies they impact. Any systems deemed critical
would then progress to a sub-system QFD.
 Sub-system Level: The requirements cascaded down from the system level are re-defined to align
with how the sub-system contributes to the system meeting its functional requirements. This
information then becomes the “Whats” for the QFD and the components and other possible “Hows”
are listed and ranked to determine the critical components. The components deemed critical would
then require progression to a component level QFD.
 Component Level: The component level QFD is extremely helpful in identifying the key and critical
characteristics or features that can be detailed on the drawings. The key or critical characteristics
then flow down into the Level 3 QFD activities for use in designing the process. For purchased
components, this information is valuable for communicating key and critical characteristics to
suppliers during sourcing negotiations and as an input to the Production Part Approval Process
(PPAP).

Level 3 QFD

The Level 3 QFD is used during the Process Development Phase where we examine which of
the processes or process steps have any correlation to meeting the component or part
specifications. In the Level 3 QFD matrix, the “Whats” are the component part technical
specifications and the “Hows” are the manufacturing processes or process steps involved in
producing the part. The matrix highlights which of the processes or process steps have the
most impact on meeting the part specifications. This information allows the production and
quality teams to focus on the Critical to Quality (CTQ) processes, which flow down into the
Level 4 QFD for further examination.

Level 4 QFD

The Level 4 QFD is not utilized as often as the previous three. Within the Level 4 QFD
matrix, the team should list all the critical processes or process characteristics in the “Whats”
column on the left and then determine the “Hows” for assuring quality parts are produced and
list them across the top of the matrix. Through ranking of the interactions of the “Whats” and
the “Hows”, the team can determine which controls could be most useful and develop quality
targets for each. This information may also be used for creating Work Instructions, Inspection
Sheets or as an input to Control Plans.
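The "Whats"-versus-"Hows" ranking that drives each QFD level can be sketched numerically. The requirements, importance weights and process steps below are hypothetical examples, using the conventional 9-3-1 relationship scale; the priority of each "How" is the sum of each "What's" importance multiplied by its relationship strength.

```python
# Minimal sketch of the "Whats" vs "Hows" ranking used at any QFD level.
# The requirements, weights and process steps below are hypothetical.

whats = {"outer diameter": 5, "surface finish": 3, "hardness": 4}  # importance 1-5

# Relationship strength of each "How" to each "What"
# (conventional QFD scale: 9 = strong, 3 = moderate, 1 = weak, 0 = none).
hows = {
    "turning":        {"outer diameter": 9, "surface finish": 3, "hardness": 0},
    "grinding":       {"outer diameter": 3, "surface finish": 9, "hardness": 0},
    "heat treatment": {"outer diameter": 0, "surface finish": 1, "hardness": 9},
}

# Technical priority of each "How" = sum over "Whats" of importance x strength.
priority = {
    how: sum(whats[w] * strength[w] for w in whats)
    for how, strength in hows.items()
}

for how, score in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{how:15s} {score}")
```

The highest-scoring "Hows" are the critical items that cascade down to the next QFD level.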

The purpose of Quality Function Deployment is not to replace an organization’s existing


design process but rather support and improve an organization’s design process. QFD
methodology is a systemic, proven means of embedding the Voice of the Customer into both
the design and production process. QFD is a method of ensuring customer requirements are
accurately translated into relevant technical specifications from product definition to product
design, process development and implementation. The fact is that every business,
organization and industry has customers. Meeting the customer’s needs is critical to success.
Implementing QFD methodology can enable you to drive the voice of your customers
throughout your processes to increase your ability to satisfy or even excite your customers.

Robust Design and Taguchi Method


The Taguchi Method, also known as the Robust Design technique, is named after Dr. Genichi Taguchi, who pioneered the approach after World War II ended; it has been developed and refined over the years since.

Unlike the Six Sigma method, which aims to reduce waste in manufacturing and during the operations phase, the Robust Design method centers on improving engineering productivity. It mainly focuses on:

1) Increasing engineering productivity so that new products can be developed quickly and at low cost, and

2) Value-based management.

Robust Design focuses on improving the fundamental function of a product or process, thereby facilitating flexible designs and concurrent engineering. The Taguchi (Robust Design) approach is rooted in a so-called energy transformation view of engineering systems, whether electrical, chemical, mechanical or otherwise. It is a unique method that works with the ideal function of a process or product, in contrast to conventional approaches, which mainly concentrate on “symptom analysis” as the basis for improvement toward robustness and quality assurance.

To ensure customer satisfaction, the Robust Design approach takes into account both:

1) Noise, the variation arising from the environment, from manufacturing and from component deterioration or failure, and

2) Cost, the cost of the resulting deterioration in the field.

It is a technique for designing experiments to investigate processes whose output depends on several factors (inputs and variables), without having to run a tedious, inefficient or overly costly test of every feasible combination of values of those variables. With a systematic choice of variable combinations, separating their individual effects becomes possible.

The Robust Design method is a distinctive alternative to traditional Design of Experiments (DOE): it differentiates itself by seeking the design-parameter settings that minimize variation before bringing the average values of the output parameters onto target. This innovative approach to engineering represents one of the most important leaps in product and process design since the beginning of the quality revolution. The Robust Design method, or Taguchi approach, makes it possible for engineers to:

 Improve products and processes so that they perform reliably and durably under the broad variety of conditions customers expose them to over the life cycle
 Maximize robustness by improving the intended function of a product and increasing its insensitivity to the noise factors that would otherwise degrade performance
 Alter and develop product formulas and processes to achieve the desired performance at the lowest possible cost and in the shortest possible time frame
 Simplify designs and make processes less costly

Over the years, Six Sigma has made it possible to reduce cost by uncovering problems which occur during manufacturing and resolving their immediate causes over the life cycle of a product. Robust Design, on the other hand, makes it feasible to prevent such problems by rigorously developing the designs of both the manufacturing process and the product. Robust Design follows a rigorous methodology to ensure a systematic process and a good outcome. Below are the 5 primary tools used in the Robust Design approach:

1. The P-Diagram. This is used to classify the variables associated with a product into signal (input), response (output), noise and control factors.
2. The Ideal Function. This is used to mathematically specify the ideal form of the signal-response relationship embodied by the design concept, so that the higher-level system works fault-free.
3. Quadratic Loss Function. Also termed the Quality Loss Function, this is used to quantify the loss incurred by the user due to deviation from the intended performance.
4. Signal-to-Noise Ratio. This is used to predict field quality through systematic laboratory experiments.
5. Orthogonal Arrays. These are used to gather dependable information about the control factors (the design parameters) with a minimal number of experiments.
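Two of these tools can be sketched directly. The quadratic loss function is L(y) = k(y − m)², where m is the target and k a cost constant, and a common nominal-the-best signal-to-noise ratio is S/N = 10·log10(mean²/variance). The measurement values and the constant k below are hypothetical.

```python
import math

# Taguchi quadratic (quality) loss: L(y) = k * (y - m)^2, where m is the
# target value and k is a cost constant. Values here are hypothetical.
def quality_loss(y, target, k):
    return k * (y - target) ** 2

# Nominal-the-best signal-to-noise ratio: S/N = 10 * log10(mean^2 / variance).
def sn_nominal_the_best(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

measurements = [9.8, 10.1, 10.0, 9.9, 10.2]   # target m = 10.0
print(quality_loss(10.2, target=10.0, k=50))  # loss grows with the square of the deviation
print(round(sn_nominal_the_best(measurements), 2))
```

A higher S/N ratio indicates a design whose mean performance is large relative to its variation, i.e. a more robust design.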

The following are the 4 main steps in the Robust Parameter Design method:

1. Problem Formulation. This step incorporates identifying the main function, developing the P-diagram, defining the ideal function and the signal-to-noise (S/N) ratio, and planning the experiments. The experiments involve varying the noise, control and signal factors systematically using orthogonal arrays.
2. Data Collection. This is the stage where the experiments are performed, either in hardware or in simulation. A full-scale prototype of the product is not necessary for experimentation; what matters is an essential model of the product that adequately captures the design concept, so the experiments can be performed economically.
3. Factor Effects Analysis. This is the stage where the effects of the control factors are estimated and the results analyzed to identify the most favorable setting of each control factor.
4. Prediction/Confirmation. This is the stage where the performance of the product under the optimum control-factor settings is predicted, confirming experiments are then run under those conditions, and the observed results are compared with the predictions. If they match, the settings are adopted; if not, the earlier steps need to be repeated.
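The factor-effects step can be sketched with a small orthogonal-array example. The L4(2³) array below is standard, but the assignment of factors A, B, C and the per-run S/N values are hypothetical; the best level of each factor is the one with the highest mean S/N across the runs where it appears.

```python
# Hypothetical sketch of the factor-effects step: an L4(2^3) orthogonal array
# for three two-level control factors A, B, C, with an S/N ratio (in dB)
# already computed for each of the four runs from noise-replicated data.
l4 = [  # columns: levels of A, B, C (1 or 2)
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]
sn = [24.0, 26.5, 21.0, 23.0]  # hypothetical S/N per run; bigger is better

def factor_effects(array, responses, factor):
    """Mean S/N at each level of one factor (0 = A, 1 = B, 2 = C)."""
    levels = {}
    for run, y in zip(array, responses):
        levels.setdefault(run[factor], []).append(y)
    return {lvl: sum(ys) / len(ys) for lvl, ys in levels.items()}

best = {}
for i, name in enumerate("ABC"):
    means = factor_effects(l4, sn, i)
    best[name] = max(means, key=means.get)  # level with the highest mean S/N
print(best)
```

The orthogonality of the array is what lets four runs separate the effects of three factors; a full factorial would need eight.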

Companies worldwide have saved millions, even hundreds of millions, of dollars by using the Taguchi approach. Telecommunications, software, electronics, xerography, automotive and other engineering fields are among the industries that have already adopted the Robust Design method. With the Robust approach, designs reach their full technological capability rapidly, and higher profit can be achieved consistently.

Topic 3 Design Failure Mode & Effects Analysis



DFMEA is a detailed, methodical approach for identifying potential failure points and their causes in a design. While initially developed for rocketry (where rockets have a high risk of failure due to complexity and failures are usually catastrophic), it is now used in many industries to reduce failures that can be avoided.

DFMEA is used to identify these failure states during each design and redesign phase of a project. This takes the form of a five-step process:

1. Failure modes and Severity


In this section you define the individual systems and subsystems of a project, along with their Failure Modes and Severity.

Failure modes:

 Full Failure
 Partial Failure
 Inconsistent Failure
 Degraded Failure
 Unintended Failure

Severity:

Usually ranked from 1-10, with 1 being an insignificant failure:

2-4: A minor annoyance. Things like a loud screech of a microphone when turning on, or an
occasional visual “flutter” on a screen that doesn’t significantly impair function.

5-6: Degradation or complete loss of a minor or secondary function of a device, like the clock
in a car or the sound card in your computer.

7-8: Degradation or complete loss of a primary function of a device, like the ignition of your
car or a motherboard failure in your computer.

9-10: Catastrophic and dangerous implications that often violate regulations: your car’s brakes failing and airbag failing to deploy, or your computer overheating to the point that it catches fire.

2. Causes and mechanisms of failure:

In this section you define the causes of failure; these vary by failure type. For example, a car’s brakes failing may be due to inferior construction materials in the brake fluid line causing it to degrade quickly and snap.

You then assign an Occurrence ranking of 1-10 for the likely failures based on your knowledge of the design (and then reassess Severity):

1: A failure prevented by current processes.

2: Design is similar enough to an existing design that failures are unlikely.

3-4: Isolated failures; failures that are so rare as to be hard to replicate (and therefore hard to
fix).

5-6: Occasional failures have been experienced in testing or in the field with the current or a
similar enough design.

7-9: New design with no data.

10: New design with no knowledge of technology (purely theoretical or experimental).

3. Current Design Controls Inspection: Actions done to verify design safety.


You assign tests based on severity, and carry out those tests if possible (i.e. if you have a prototype). You also define Detection rankings, again on a 1-10 scale which varies from project to project. In general, though, this will range from 1, meaning the failure is prevented by the design and standards themselves, to 10, meaning it is impossible to evaluate.

4. Risk Priority Number: the product of Severity × Occurrence × Detection.

This would put failures that are high Severity (high risk) that Occur frequently and are hard to
Detect at the top.

You then determine Recommended Actions:

 Eliminate high severity Failure Modes


 Lower Occurrence
 Lower Detection

5. Repeat until RPN is below desired threshold, or it is determined that this is impossible. Record
results.

Following these steps properly will result in fewer (preferably no) unexpected failures in a design and show how common failure points can be avoided.
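The RPN step above reduces to a simple computation: multiply each failure mode's Severity, Occurrence and Detection rankings and sort the results so the riskiest modes surface first. The failure modes and rankings below are hypothetical, loosely based on the examples in the severity scale above.

```python
# Minimal sketch of the RPN calculation: RPN = Severity x Occurrence x Detection.
# The failure modes and their 1-10 rankings below are hypothetical examples.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("brake line degrades and snaps", 10, 4, 6),
    ("dashboard clock drifts",         5, 6, 2),
    ("screen flutter on startup",      3, 5, 3),
]

# Rank failure modes by descending RPN so the riskiest come first.
ranked = sorted(
    ((s * o * d, desc) for desc, s, o, d in failure_modes),
    reverse=True,
)
for rpn, desc in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```

In practice, Recommended Actions would then target the top of this list until every RPN falls below the agreed threshold.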

Topic 4 Product Reliability Analysis



Product Reliability is defined as the probability that a device will perform its required function, under stated conditions, for a specific period of time. Product Reliability is quantified as MTBF (Mean Time Between Failures) for repairable products and MTTF (Mean Time To Failure) for non-repairable products.

Now that we’ve defined it, how can we measure it, or better yet, how can we predict it?
The Famous Bathtub Curve

Figure 1 shows the reliability “bathtub curve” which models the cradle to grave instantaneous
failure rate vs. time, which we would see if we were to wait long enough and keep good
records for a given lot of devices. This curve is modeled mathematically by exponential
functions. More on this later.
Figure 1. Reliability Bathtub Curve

The life of a population of devices (a group of devices of the same type) can be divided into
three distinct periods:

Early Life

If we follow the slope from the leftmost start to where it begins to flatten out, this can be considered the first period. The first period is characterized by a decreasing failure rate. It is what occurs during the “early life” of a population of units: the weaker units fail, leaving a population that is more robust.

Useful Life

The next period is the flat bottom portion of the graph. It is called the “useful life” period. Failures occur in a random sequence during this time. It is difficult to predict which failure mode will occur, but the rate of failures is predictable. Notice the flat, constant slope.

Wearout

The third period begins at the point where the slope begins to increase and extends to the
rightmost end of the graph. This is what happens when units become old and begin to fail at
an increasing rate. It is called the “wearout” period.

The formula for calculating the MTBF is

MTBF= T/R where T = total time and R = number of failures

MTTF stands for Mean Time To Failure. To distinguish between the two, the concept of suspensions must first be understood. In reliability calculations, a suspension occurs when a destructive test or observation has been completed without observing a failure. MTBF calculations do not consider suspensions, whereas MTTF does. MTTF is the total hours of service of all devices divided by the number of devices.

It is only when all the parts fail with the same failure mode that MTBF converges to MTTF.

MTTF= T/N where T = total time and N = Number of units under test.
Example: Suppose 10 devices are tested for 500 hours. During the test 2 failures occur.

The estimate of the MTBF is:

MTBF= (10*500)/2 = 2,500 hours / failure.

Whereas for MTTF

MTTF= (10*500)/10 = 500 hours.

If the MTBF is known, one can calculate the failure rate as the inverse of the MTBF. The
formula for

Failure rate is:

failure rate= 1/MTBF = R/T where R is the number of failures and T is total time.

Once an MTBF is calculated, what is the probability that any one particular device will be
operational at time equal to the MTBF?
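Under the constant-failure-rate model of the useful-life period (the exponential functions mentioned for the bathtub curve), the reliability at time t is R(t) = e^(−t/MTBF), so at t = MTBF the survival probability is e^(−1), roughly 36.8%. This can be checked numerically with the example figures above:

```python
import math

# Constant-failure-rate (useful-life) model: R(t) = exp(-t / MTBF).
# Numbers reproduce the example above: 10 devices, 500 hours, 2 failures.
total_hours = 10 * 500
failures = 2

mtbf = total_hours / failures  # 2500 hours per failure
failure_rate = 1 / mtbf        # failures per hour

def reliability(t, mtbf):
    """Probability a device is still operating at time t (exponential model)."""
    return math.exp(-t / mtbf)

print(mtbf)                               # 2500.0
print(round(reliability(mtbf, mtbf), 4))  # e^-1, about 0.3679
```

So, perhaps counter-intuitively, only about 37% of devices are expected to survive to the MTBF under this model.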

UNIT 4

4 Quality Circles

Quality Circle

A quality circle is a volunteer group composed of workers, usually under the leadership of
their supervisor, who are trained to identify, analyze and solve work-related problems and
present their solutions to management in order to improve the performance of the
organization, and motivate and enrich the work of employees. When matured, true quality
circles become self-managing, having gained the confidence of management.

A quality circle is also a participative management technique, within the framework of a company-wide quality system, in which small teams of (usually 6 to 12) employees voluntarily form to define and solve a quality- or performance-related problem. In Japan, where this practice originated, quality circles are an integral part of enterprise management and are called quality control circles.

“A Quality Circle is a volunteer group composed of members who meet to talk about workplace and service improvements and make presentations to their management with their ideas.” (Prasad, L.M, 1998).

Quality circles enrich the lives of workers or students and create harmony and high performance. Typical topics are improving occupational safety and health, improving product design, and improving the workplace and manufacturing processes.
Objectives of Quality Circle

The perception of Quality Circles today is ‘appropriateness for use’, and the tactic implemented is to prevent imperfections in services rather than detect and eliminate them. Hence the attitudes of employees influence quality. Quality Circles encourage employee participation as well as promote teamwork, thus motivating people to contribute towards organizational effectiveness through group processes. The following could be grouped as the broad intentions of a Quality Circle:

1. To contribute towards the improvement and development of the organization or a department.


2. To overcome the barriers that may exist within the prevailing organizational structure so as to foster
an open exchange of ideas.
3. To develop a positive attitude and feel a sense of involvement in the decision making processes of
the services offered.
4. To respect humanity and build a happy workplace that is worthwhile to work in.
5. To display human capabilities fully and, in the long run, draw out their infinite possibilities.
6. To improve the quality of products and services.

7. To improve competence, which is one of the goals of all organizations.


8. To reduce cost and redundant efforts in the long run.
9. With improved efficiency, lead times for information and subassemblies are reduced, resulting in an improvement in meeting customers’ due dates.
10. Customer satisfaction is the fundamental goal of any organization. It will ultimately be achieved by Quality Circles, which will also help the organization stay competitive for a long time.

BENEFITS OF QUALITY CIRCLES

There are no monetary rewards in QCs. However, there are many other gains, which largely benefit the individual and, in turn, benefit the business. These are:

(i) Self-development: QCs assist the self-development of members by improving self-confidence, bringing about attitudinal change, and providing a sense of accomplishment.

(ii) Social development: QC is a consultative and participative programme in which every member cooperates with others. This interaction assists in developing harmony.

(iii) Opportunity to attain knowledge: QC members have a chance to attain new knowledge by sharing opinions, thoughts, and experiences.

(iv) Potential leader: Every member gets a chance to build up his or her leadership potential, given that any member can become a leader.

(v) Enhanced communication skills: The mutual problem solving and presentation before
the management assists the members to develop their communication skills.

(vi) Job satisfaction: QCs promote creativity by tapping the undeveloped intellectual skills of individuals. Members also perform activities different from their regular work, which enhances their self-confidence and gives them immense job satisfaction.

(vii) Healthy work environment: QCs create a tension-free atmosphere in which each individual likes, understands, and co-operates with others.

(viii) Organizational benefits: The individual benefits create a synergistic effect, leading to
cost effectiveness, reduction in waste, better quality, and higher productivity.

1 Total Quality Management (TQM)



Total Quality Management (TQM)

Total Quality Management (TQM) is the continual process of detecting and reducing or
eliminating errors in manufacturing, streamlining supply chain management, improving the
customer experience, and ensuring that employees are up to speed with their training. Total
quality management aims to hold all parties involved in the production process accountable
for the overall quality of the final product or service.

A total approach to quality is the current thinking of today; it is popularly called total quality management (TQM).

TQM is a philosophy that believes in company-wide responsibility for quality, fostering a quality culture throughout the organization and involving continuous improvement in the quality of work of all employees, with a view to best meeting the requirements of customers.

Advantages of TQM

(i) Sharpens Competitive Edge of the Enterprise

TQM helps an organization to reduce costs through elimination of waste, rework etc. It
increases profitability and competitiveness of the enterprise; and helps to sharpen the
organization’s competitive edge, in the globalized economy of today.
(ii) Excellent Customer Satisfaction

By focusing on customer requirements, TQM makes for excellent customer satisfaction. This
leads to more and more sales, and excellent relations with customers.

(iii) Improvement in Organisational Performance

Through promoting a quality culture in the organization, TQM leads to improvements in the performance of managerial and operative personnel.

(iv) Good Public Image of the Enterprise

TQM helps to build a good image of the enterprise in the minds of people in society. This is due to the stress on a total quality system and customers’ requirements under the philosophy of TQM.

(v) Better Personnel Relations

TQM aims at promoting mutual trust and openness among employees, at all levels in the
organization. This leads to better personnel relations in the enterprise.
Limitations of TQM

The philosophy of TQM suffers from the following major limitations

(i) Waiting for a Long Time

TQM requires significant change in the organization, consisting of:

1. Change in the methods, processes etc. of the organization.

2. Change in the attitudes, behaviour etc. of people.

Launching TQM and gaining acceptance of its philosophy therefore requires a long wait for the organization; it is not possible to accept and implement TQM overnight.

(ii) Problem of Labour Management Relations

Success of TQM depends on the relationship between labour and management, because participation of people at all levels is a pre-requisite for implementing a TQM programme. In many organizations, at home and abroad, labour-management relations are quite tense. As such, launching, accepting and implementing a TQM programme is nothing more than a dream for such organizations.
Basic Principles of TQM

In TQM, the processes and initiatives that produce products or services are thoroughly
managed. By this way of managing, process variations are minimized, so the end product or
the service will have a predictable quality level.

Following are the key principles used in TQM

(i) Top management – Upper management is the driving force behind TQM, and bears the responsibility of creating an environment to roll out TQM concepts and practices.
(ii) Training needs – When a TQM rollout is due, all employees of the company need to go through a proper cycle of training. Once TQM implementation starts, employees should go through regular training and a certification process.

(iii) Customer orientation – The quality improvements should ultimately target improving customer satisfaction. For this, the company can conduct surveys and feedback forums to gather customer satisfaction and feedback information.

(iv) Involvement of employees – Pro-activeness of employees is the main contribution from the staff. The TQM environment should make sure that employees who are proactive are rewarded appropriately.

(v) Techniques and tools – Use of techniques and tools suitable for the company is one of
the main factors of TQM.

(vi) Corporate culture – The corporate culture should provide employees with the tools and techniques they need to work towards achieving higher quality.

(vii) Continuous improvement – TQM implementation is not a one-time exercise. As long as the company practices TQM, the TQM process should be improved continuously.

Topic 7 Six Sigma


Six Sigma is a business management strategy which aims at improving the quality of
processes by minimizing and eventually removing the errors and variations. The concept of
Six Sigma was introduced by Motorola in 1986, but was popularized by Jack Welch who
incorporated the strategy in his business processes at General Electric. The concept of Six
Sigma came into existence when one of Motorola’s senior executives complained of
Motorola’s bad quality. Bill Smith eventually formulated the methodology in 1986.

Quality plays an important role in the success and failure of an organization. Neglecting an
important aspect like quality will not let you survive in the long run. Six Sigma ensures
superior quality of products by removing the defects in the processes and systems. Six
sigma is a process which helps in improving the overall processes and systems by identifying
and eventually removing the hurdles which might stop the organization from reaching the levels of perfection. According to Six Sigma, any sort of challenge which comes across in an
organization’s processes is considered to be a defect and needs to be eliminated.

Organizations practicing Six Sigma create special levels for employees within the
organization. Such levels are called “Green Belts”, “Black Belts” and so on. Individuals
certified with any of these belts are often experts in six sigma process. According to Six
Sigma any process which does not lead to customer satisfaction is referred to as a defect
and has to be eliminated from the system to ensure superior quality of products and
services. Every organization strives hard to maintain excellent quality of its brand and the
process of six sigma ensures the same by removing various defects and errors which come in
the way of customer satisfaction.
The process of Six Sigma originated in manufacturing processes but now it finds its use in
other businesses as well. Proper budgets and resources need to be allocated for the
implementation of Six Sigma in organizations.

Following are the two Six Sigma methods

 DMAIC
 DMADV

DMAIC focuses on improving existing business practices. DMADV, on the other hand, focuses on creating new strategies and policies.
DMAIC has Five Phases

D – Define the Problem. In the first phase, the problems that need to be addressed are clearly defined. Feedback is taken from customers about how they feel about a particular product or service, and is carefully monitored to understand problem areas and their root causes.

M – Measure and find out the key points of the current process. Once the problem is
identified, employees collect relevant data which would give an insight into current
processes.

A – Analyze the data. The information collected in the second stage is thoroughly verified, and the root causes of the defects are carefully studied and investigated to find out how they are affecting the entire process.

I – Improve the current processes based on the research and analysis done in the previous
stage. Efforts are made to create new projects which would ensure superior quality.

C – Control the processes so that they do not lead to defects.


DMADV Method

D – Define design goals and strategies which ensure one hundred percent customer satisfaction.

M – Measure and identify parameters that are important for quality.

A – Analyze and develop high level alternatives to ensure superior quality.

D – Design details and processes.

V – Verify various processes and finally implement the same.

Topic 4 Six Sigma for Process Improvement



Six Sigma follows the DMAIC model for quality improvement and problem reduction (For
existing processes). This well-defined process approach consists of five phases in order:
Figure 1: Understanding DMAIC

 Define
 Measure
 Analyze
 Improve
 Control

It is an integral part of the Lean Six Sigma process, but can be implemented as a standalone quality improvement process. Indeed, it is a widely preferred tool for improving the efficiency and the effectiveness of any organization. Within the DMAIC framework, Six Sigma can utilize several quality management tools.

5 Six Sigma in Product Development



As Product Development focuses more on innovation and the creation of new products, its core principles differ from those of standard Lean Production and are given different priorities. Below are these key principles, broken down into stages.

First Principle: Define Value to the Customer

1. Voice of the Customer


2. Quality Function Deployment
3. Lean Design
4. Platforms and Design Re-Use
5. Rapidly Explore Alternatives

Second Principle: Identify the Value Stream and Reduce Waste

6. Streamline the Development Process


7. 5S Workplace
8. Standardized Work
9. Integration of Design Tools

Third Principle: Make the Value Creating Steps Flow

 Pipeline Management
 Flow Process and Pull Scheduling
 Reduce Batch Sizes
 Synchronize Activities
 Defer Commitment

Fourth Principle: Empower the Team

 Cross-Functional Team
 Workforce Empowerment
 Right Resources

Fifth Principle: Learn and Improve

 Amplify Learning

Ultimately, the successful implementation of Product Development should increase innovation within an organization tenfold, as well as facilitate the introduction of new products – potentially even at a 400-500% increase.

Topic 6 Design for Six Sigma



In the current global marketplace, competition for products and services has never been
higher. Consumers have multiple choices for many very similar products. Therefore, many
manufacturing companies are continually striving to introduce completely new products or
break into new markets. Sometimes the products meet the consumer’s needs and expectations
and sometimes they don’t. The company will usually redesign the product, sometimes
developing and testing multiple iterations prior to re-introducing the product to market.
Multiple redesigns of a product are expensive and wasteful. It would be much more
beneficial if the product met the actual needs and expectations of the customer, with a higher
level of product quality the first time. Design for Six Sigma (DFSS) focuses on performing
additional work up front to assure you fully understand the customer’s needs and
expectations prior to design completion. DFSS requires involvement by all stakeholders in
every function. When following a DFSS methodology you can achieve higher levels of
quality for new products or processes.
Design for Six Sigma (DFSS)

Design for Six Sigma (DFSS) is a different approach to new product or process development
in that there are multiple methodologies that can be utilized. Traditional Six Sigma utilizes
DMAIC or Define, Measure, Analyze, Improve and Control. This methodology is most
effective when used to improve a current process or make incremental changes to a product
design. In contrast, Design for Six Sigma is used primarily for the complete re-design of a
product or process. The methods, or steps, used for DFSS seem to vary according to the
business or organization implementing the process. Some examples are DMADV, DCCDI
and IDOV. What all the methodologies seem to have in common is that they all focus on
fully understanding the needs of the customer and applying this information to the product
and process design. The DFSS team must be cross-functional to ensure that all aspects of the
product are considered, from market research through the design phase, process
implementation and product launch. With DFSS, the goal is to design products and processes
while minimizing defects and variations at their roots. The expectation for a process
developed using DFSS is reportedly 4.5 sigma or greater.
Why Implement Design for Six Sigma (DFSS)

When your company designs a new product or process from the ground up it requires a
sizable amount of time and resources. Many products today are highly complex, providing
multiple opportunities for things to go wrong. If your design does not meet the customer’s
actual wants and expectations or your product does not provide the value the customer is
willing to pay for, the product sales will suffer. Redesigning products and processes is
expensive and increases your time to market. In contrast, by utilizing Design for Six Sigma
methodologies, companies have reduced their time to market by 25 to 40 percent while
providing a high quality product that meets the customer’s requirements. DFSS is a proactive
approach to design with quantifiable data and proven design tools that can improve your
chances of success.
When to Implement Design for Six Sigma (DFSS)

DFSS should be used when designing a completely new product or service. DFSS is intended
for use when you must replace a product instead of redesigning. When the current product or
process cannot be improved to meet customer requirements, it is time for replacement. The
DFSS methodologies are not meant to be applied to incremental changes in a process or
design. DFSS is used for prevention of quality issues. Utilize the DFSS approach and its
methodologies when your goal is to optimize your design to meet the customer’s actual wants
and expectations, shorten the time to market, provide a high level of initial product quality
and succeed the first time.
How to Implement Design for Six Sigma (DFSS)

As previously mentioned, DFSS is more of an approach to product design rather than one
particular methodology. There are some fundamental characteristics that each of the
methodologies share. The DFSS project should involve a cross-functional team from the
entire organization. It is a team effort that should be focused on the customer requirements
and Critical to Quality parameters (CTQs). The DFSS team should invest time studying and
understanding the issues with the existing systems prior to developing a new design. There
are multiple methodologies being used for implementation of DFSS. One of the most
common techniques, DMADV (Define, Measure, Analyze, Design, Verify), is detailed
below.
Define

The Define stage should include the Project Charter, Communication Plan and Risk
Assessment / Management Plan.
The Project Charter
The team should develop a Project Charter, which should include:

 Purpose or reason for project – preferably with quantifiable data or measurable targets
 Voice of Business – what the business expects to gain from completion of the project
 Project Scope – establish the scope and parameters of the project and determine exactly what is in
and out of scope for the project to prevent “project creep”
 Problem statement or identification of the gap between current and desired state
 Statement of the goals for improved revenue, customer satisfaction or market share stated in
measurable, well-defined targets
 Project timeline or schedule with well-defined gates and deliverables for each gate review.
 Project Budget – Cost target for the project including any capital expenditures
 Identification of the project sponsor and key stakeholders
 Identification of the cross-functional team members
 Clarification of roles and responsibilities for the team members and other stakeholders
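One informal way to keep a charter reviewable at gate reviews is to hold it as structured data and flag any elements still missing; a minimal sketch (the field names are illustrative, not a prescribed format):

```python
from dataclasses import dataclass, field, fields

@dataclass
class ProjectCharter:
    purpose: str = ""
    voice_of_business: str = ""
    scope: str = ""
    problem_statement: str = ""
    goals: str = ""
    timeline: str = ""
    budget: str = ""
    sponsor: str = ""
    team: list = field(default_factory=list)

    def missing_items(self) -> list:
        """Return charter fields still left blank, for a gate review."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

charter = ProjectCharter(purpose="Cut warranty returns by 30%",
                         sponsor="VP Quality")
print(charter.missing_items())  # everything still to be filled in
```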

The Communication Plan


During the Define phase, the team should develop a strategy for proper communication
throughout the life of the project. The Communication Plan should be designed to address
different aspects and techniques for discussing the evaluation results. The plan should also
guide the process to successfully share results of the evaluation. To develop the
Communication Plan, answer the following questions:

 Who is the primary contact on the team that is responsible for communicating?
 What are the main goals for the communication process?
 Who are you communicating to? (Identify target audience)
 When and how often will the communication occur?
 What methods will be used for communication?

The Risk Assessment or Risk Management Plan


The project manager should prepare a Risk Assessment or Risk Management Plan that
includes, but is not limited, to the following information:

 Risks associated with the project


 Impact of risks against the success of the project
 Outline / plan for managing any project risk

Measure

During the Measurement Phase, the project focus is on understanding customer needs and
wants and then translating them into measurable design requirements. The team should not
only focus on requirements or “Must Haves” but also on the “Would likes”, which are
features or functions that would excite the customer, something that would set your product
apart from the competition. The customer information may be obtained through various
methods including:

 Customer surveys
 Dealer or site visits
 Warranty or customer service information
 Historical data
 Consumer Focus Groups

Analyze

In the Analyze Phase, the customer information should be captured and translated into
measurable design performance or functional requirements. The Parameter (P) Diagram is
often used to capture and translate this information. Those requirements should then be
converted into System, Sub-system and Component level design requirements. The Quality
Function Deployment (QFD) and Characteristic Matrix are effective tools for driving the
needs of the customer from the machine level down to component level requirements. The
team should then use the information to develop multiple concept level design options.
Various assessment tools like benchmarking or brainstorming can be used to evaluate how
well each of the design concepts meet customer and business requirements and their potential
for success. Then the team will evaluate the options and select a final design using decision-
making tools such as a Pugh Matrix or a similar method.
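A Pugh Matrix scores each candidate concept against a baseline ("datum") design, criterion by criterion, as better (+1), same (0) or worse (-1), then totals the scores. A minimal sketch with invented concepts and criteria:

```python
criteria = ["meets CTQs", "unit cost", "manufacturability", "time to market"]

# Scores versus the datum concept: +1 better, 0 same, -1 worse.
concepts = {
    "Concept A": [+1,  0, -1, +1],
    "Concept B": [+1, +1,  0, -1],
    "Concept C": [ 0, -1, +1,  0],
}

def pugh_ranking(matrix):
    """Sum the +/0/- scores per concept and rank, highest first."""
    totals = {name: sum(scores) for name, scores in matrix.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in pugh_ranking(concepts):
    print(f"{name}: {total:+d}")
```

In practice the datum is usually the current or competing design, and weighted variants of the matrix multiply each score by a criterion weight before summing.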
Design

When the DFSS team has selected a single concept-level design, it is time to begin the
detailed design work using 3D modeling, preliminary drawings, etc. The design team
evaluates the physical product and other considerations including, but not limited to, the
following:

 Manufacturing process
 Equipment requirements
 Supporting technology
 Material selection
 Manufacturing location
 Packaging

Once the preliminary design is determined the team begins evaluation of the design using
various techniques, such as:

 Finite Element Analysis (FEA)


 Failure Modes and Effects Analysis (FMEA)
 Tolerance Stack Analysis
 Design Of Experiment (DOE)

FMEA is a popular tool used to identify potential design risks, identify key characteristics
and develop a list of actions to either alter the design or add to the validation plan. Computer
simulation and analysis tools can allow the team to work through the processes and
understand the process inputs and desired outputs. The design phase is complete once the
team has developed a solid design and validation plan for the new product or process that will
meet customer and business requirements. One popular tool is Design Verification Plan and
Report (DVP&R), which documents the validation plan and provides a section for reporting
results.
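FMEA prioritization is commonly summarized as a Risk Priority Number, RPN = Severity x Occurrence x Detection, with each factor rated 1-10 and high-RPN failure modes driving design changes or additions to the validation plan. A sketch with invented failure modes (the action threshold of 100 is a common rule of thumb, not part of any standard):

```python
# (failure mode, severity, occurrence, detection), each rated 1-10.
failure_modes = [
    ("Seal leaks under thermal cycling", 8, 4, 3),
    ("Connector mis-seats at assembly",  6, 5, 2),
    ("Housing cracks on drop",           9, 2, 7),
]

def rpn_table(modes, action_threshold=100):
    """Compute RPN = S * O * D per mode and flag those needing action."""
    rows = [(mode, s * o * d, s * o * d >= action_threshold)
            for mode, s, o, d in modes]
    return sorted(rows, key=lambda r: r[1], reverse=True)

for mode, rpn, act in rpn_table(failure_modes):
    print(f"RPN {rpn:>3}  {('ACTION' if act else 'ok'):6}  {mode}")
```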
Verify

During the Verify Phase, the team introduces the design of the product or process and
performs the validation testing to verify that it does meet customer and performance
requirements. In addition, the team should develop a detailed process map, process
documentation and instructions. Often a Process FMEA is performed to evaluate the risk
inherent in the process and address any concerns prior to a build or test run. Usually a
prototype or pilot build is conducted. A pilot build can take the form of a limited product
production run, service offering or possibly a test of a new process. The information or data
collected during the prototype or pilot run is then used to improve the design of the product
or process prior to a full roll-out or product launch. When the project is complete the team
ensures the process is ready to hand-off to the business leaders and current production teams.
The team should provide all required process documentation and a Process Control Plan.
Finally, the project leaders, stakeholders and sponsors complete the project documentation
and communicate the project results. The entire team should then celebrate project
completion.
Other Variations of DFSS

DMADV seems to be the most prominently used process but as mentioned previously it is
not the only option. Even DMADV has a variation sometimes utilized, known as DMADOV,
which adds the step identified as Optimize or Optimization. This step can be beneficial for
developing new or revised business procedures. An additional variation of DFSS is known as
DCCDI (Define Customer and Concept, Design, Implement). DCCDI has many similarities
with DMADV and contains similar define, measure and design stages. Yet another variation
is IDOV (Identify, Design, Optimize, Verify). The IDOV method also adds an optimization
phase. Companies can implement any of these methods according to their business culture
and needs.

UNIT 5

Topic 2 ISO (9000 & 14000 Series)



The International Organization for Standardization (ISO) is a worldwide federation consisting of member bodies from 91 countries, which
promotes the development of international manufacturing, trade and communication standards.

ISO 9000 is a set of international standards on quality management and quality assurance developed to help companies effectively
document the quality system elements to be implemented to maintain an efficient quality system. They are not specific to any one
industry and can be applied to organizations of any size.

ISO 9000 can help a company satisfy its customers, meet regulatory requirements, and achieve continual improvement. However, it
should be considered to be a first step, the base level of a quality system, not a complete guarantee of quality.

ISO 9000 refers to a generic series of standards published by the ISO that provide quality assurance requirements and quality
management guidance. ISO 9000 is a quality system standard, not a technical product standard. The series originally
contained four standards – ISO 9001, ISO 9002, ISO 9003 and ISO 9004 – from which firms selected the standard most relevant to their
business activities. The ISO 9000:2000 revision consolidated ISO 9001, 9002 and 9003 into a single requirements standard, ISO 9001.

ISO 9000 Series standards

The ISO 9000 family contains these standards:

 ISO 9001:2015: Quality management systems – Requirements


 ISO 9000:2015: Quality management systems – Fundamentals and vocabulary (definitions)
 ISO 9004:2009: Quality management systems – Managing for the sustained success of an organization (continuous improvement)
 ISO 19011:2011: Guidelines for auditing management systems

ISO 9000 principles of quality management

The ISO 9000:2015 and ISO 9001:2015 standards are based on seven quality management principles that senior management can
apply for organizational improvement:

 Customer focus
 Understand the needs of existing and future customers
 Align organizational objectives with customer needs and expectations
 Meet customer requirements
 Measure customer satisfaction
 Manage customer relationships
 Aim to exceed customer expectations

 Leadership
 Establish a vision and direction for the organization
 Set challenging goals
 Model organizational values
 Establish trust
 Equip and empower employees
 Recognize employee contributions

 Engagement of people
 Ensure that people’s abilities are used and valued
 Make people accountable
 Enable participation in continual improvement
 Evaluate individual performance
 Enable learning and knowledge sharing
 Enable open discussion of problems and constraints

 Process approach
 Manage activities as processes
 Measure the capability of activities
 Identify linkages between activities
 Prioritize improvement opportunities
 Deploy resources effectively

 Improvement
 Improve organizational performance and capabilities
 Align improvement activities
 Empower people to make improvements
 Measure improvement consistently
 Celebrate improvements

 Evidence-based decision making
 Ensure the accessibility of accurate and reliable data
 Use appropriate methods to analyze data
 Make decisions based on analysis
 Balance data analysis with practical experience

 Relationship management
 Identify and select suppliers to manage costs, optimize resources, and create value
 Establish relationships considering both the short and long term
 Share expertise, resources, information, and plans with partners
 Collaborate on improvement and development activities
 Recognize supplier successes

Why Consider ISO 9000 Registration

There are several benefits to implementing this series in your company. There is also a strong belief that having a documented quality
procedure gives a firm a strong advantage over its competitors. For example, it will guide you to build quality into your product or service
and avoid costly after-the-fact inspections, warranty costs, and rework.

Most importantly, more contractors are working with ISO certified customers every year as the certifications are more widely used and
accepted in the United States.

ISO14000

ISO 14000 is a family of standards related to environmental management that exists to help organizations

(a) minimize how their operations (processes, etc.) negatively affect the environment (i.e. cause adverse changes to air, water, or land);

(b) comply with applicable laws, regulations, and other environmentally oriented requirements; and

(c) continually improve in the above.


ISO 14000 is similar to ISO 9000 quality management in that both pertain to the process of how a product is produced, rather than to
the product itself. As with ISO 9001, certification is performed by third-party organizations rather than being awarded by ISO directly.
The ISO 19011 and ISO 17021 audit standards apply when audits are being performed.

The ISO 14000 series of environmental management standards are intended to assist organizations manage the environmental effect
of their business practices. The ISO 14000 series is similar to the ISO 9000 series published in 1987. The purpose of the ISO 9000
series is to encourage organizations to institute quality assurance management programs. Although ISO 9000 deals with the overall
management of an organization and ISO 14000 deals with the management of the environmental effects of an organization, both
standards are concerned with processes, and there is talk of combining the two series into one.

Both series of standards were published by ISO, the International Organization for Standardization. The purpose of ISO is to facilitate
international trade and cooperation in commercial, intellectual, scientific and economic endeavors by developing international standards.
ISO originally focused on industrial and mechanical engineering standards. Now, it has ventured into setting standards for an
organization’s processes, policies, and practices.

The requirements of ISO 14001 are an integral part of the European Union’s Eco-Management and Audit Scheme (EMAS). EMAS’s
structure and material are more demanding, mainly concerning performance improvement, legal compliance, and reporting duties. The
current version of ISO 14001 is ISO 14001:2015, which was published in September 2015.

ISO 14000 refers to a series of standards on environmental management tools and systems. ISO 14000 deals with a company’s system
for managing its day-to-day operations and how they impact the environment. The Environmental Management System and
Environmental Auditing address a wide range of issues to include the following:

 Top management commitment to continuous improvement, compliance, and pollution prevention.


 Creating and implementing environmental policies, including setting and meeting appropriate targets.
 Integrating environmental considerations in operating procedures.
 Training employees in regard to their environmental obligations.
 Conducting audits of the environmental management system.

Topic 2 ISO 14001



ISO 14001:2015 specifies the requirements for an environmental management system that an
organization can use to enhance its environmental performance. ISO 14001:2015 is intended
for use by an organization seeking to manage its environmental responsibilities in a
systematic manner that contributes to the environmental pillar of sustainability.

ISO 14001:2015 helps an organization achieve the intended outcomes of its environmental
management system, which provide value for the environment, the organization itself and
interested parties. Consistent with the organization’s environmental policy, the intended
outcomes of an environmental management system include:

 Enhancement of environmental performance;


 Fulfilment of compliance obligations;
 Achievement of environmental objectives.

ISO 14001:2015 is applicable to any organization, regardless of size, type and nature, and
applies to the environmental aspects of its activities, products and services that the
organization determines it can either control or influence considering a life cycle
perspective. ISO 14001:2015 does not state specific environmental performance criteria.

ISO 14001:2015 can be used in whole or in part to systematically improve environmental
management. Claims of conformity to ISO 14001:2015, however, are not acceptable unless
all its requirements are incorporated into an organization's environmental management
system and fulfilled without exclusion.

Topic 3 ISO 22000


ISO 22000 is an international standard that defines the requirements of a food safety
management system covering all sizes of all organizations throughout the food chain.
Benefits of ISO 22000

 Introduce internationally recognized processes to your business


 Give suppliers and stakeholders confidence in your hazard controls
 Put these hazard controls in place across your supply chain
 Introduce transparency around accountability and responsibilities
 Continually improve and update your systems so they stay effective
 ISO 22000 contains the food safety management system requirements of FSSC 22000 (which is a
Global Food Safety Initiative, GFSI recognised scheme) and is used along with requirements for
prerequisite programs for the appropriate industry sector.

Topic 4 ISO 27001



ISO 27001 (formally known as ISO/IEC 27001:2005) is a specification for an information
security management system (ISMS). An ISMS is a framework of policies and procedures
that includes all legal, physical and technical controls involved in an organisation's
information risk management processes.

According to its documentation, ISO 27001 was developed to “provide a model for
establishing, implementing, operating, monitoring, reviewing, maintaining and improving an
information security management system.”

ISO 27001 uses a top-down, risk-based approach and is technology-neutral. The specification
defines a six-part planning process:

1. Define a security policy.


2. Define the scope of the ISMS.
3. Conduct a risk assessment.
4. Manage identified risks.
5. Select control objectives and controls to be implemented.
6. Prepare a statement of applicability.
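Steps 3 through 6 of this planning process amount to scoring risks and selecting controls wherever the score exceeds the organisation's risk appetite, with the outcome recorded in the statement of applicability. A toy sketch (the assets, 1-5 scales and acceptance threshold are illustrative assumptions, not from the standard):

```python
# (asset, threat, likelihood 1-5, impact 1-5, candidate control)
assessment = [
    ("Customer DB", "SQL injection",     4, 5, "input validation + WAF"),
    ("Laptops",     "theft",             3, 4, "full-disk encryption"),
    ("Old wiki",    "stale credentials", 2, 1, "decommission review"),
]

RISK_ACCEPTANCE = 6  # risks scoring above this must be treated

def statement_of_applicability(entries):
    """Score each risk; select a control where risk exceeds appetite."""
    soa = []
    for asset, threat, likelihood, impact, control in entries:
        risk = likelihood * impact
        chosen = control if risk > RISK_ACCEPTANCE else "accept risk"
        soa.append((asset, threat, risk, chosen))
    return soa

for row in statement_of_applicability(assessment):
    print(row)
```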

The specification includes details for documentation, management responsibility, internal
audits, continual improvement, and corrective and preventive action. The standard requires
cooperation among all sections of an organisation.

The 27001 standard does not mandate specific information security controls, but it provides a
checklist of controls that should be considered in the accompanying code of practice,
ISO/IEC 27002:2005. This second standard describes a comprehensive set of information
security control objectives and a set of generally accepted good practice security controls.

ISO 27002 contains 12 main sections:

1. Risk assessment
2. Security policy
3. Organization of information security
4. Asset management
5. Human resources security
6. Physical and environmental security
7. Communications and operations management
8. Access control
9. Information systems acquisition, development and maintenance
10. Information security incident management
11. Business continuity management
12. Compliance

Organisations are required to apply these controls appropriately in line with their specific
risks. Third-party accredited certification is recommended for ISO 27001 conformance.

Other standards being developed in the 27000 family are:

 27003: Implementation guidance.


 27004: An information security management measurement standard suggesting metrics to help
improve the effectiveness of an ISMS.
 27005: An information security risk management standard. (Published in 2008)
 27006: A guide to the certification or registration process for accredited ISMS certification or
registration bodies. (Published in 2007)
 27007: ISMS auditing guideline

Topic 5 OHSAS 18001 and QS 9000



OHSAS 18001

BS OHSAS 18001 sets out the minimum requirements for occupational health and safety
management best practice. Bringing occupational health and safety (OH&S) management into
your business can deliver benefits for your employees, your operations and your customers.

BS OHSAS 18001 is a framework for an occupational health and safety (OH&S) management
system and is a part of the OHSAS 18000 (sometimes incorrectly identified as ISO 18000)
series of standards, along with OHSAS 18002. It can help you put in place the policies,
procedures and controls needed for your organization to achieve the best possible working
conditions and workplace health and safety, aligned to internationally recognized best
practice.
QS 9000

QS9000 was a quality standard developed by a joint effort of the “Big Three” American
automakers, General Motors, Chrysler and Ford. It was introduced to the industry in 1994. It
has been adopted by several heavy truck manufacturers in the U.S. as well. Before its
termination, essentially all suppliers to the US automakers needed to implement a standard
QS9000 system.

The standard is divided into three sections with the first section being ISO 9001 plus some
automotive requirements.
The second section is titled “Additional Requirements” and contains system requirements
that have been adopted by all three automakers – General Motors, Chrysler and Ford.

The third section is titled the “Customer Specific Section” which contains system
requirements that are unique to each automotive or truck manufacturer.

On December 14, 2006, all QS9000 certifications were terminated. With QS9000 (the middle
certification between ISO 9001 and ISO/TS 16949) no longer valid, businesses had to choose
between ISO 9001 and TS 16949. QS9000 is considered superseded by ISO/TS 16949, which is
now published by the IATF and accordingly renamed IATF 16949:2016 (the current version).

Topic 6 Indian Quality Standards



Manufacturers often sell their products with a quality mark of a product certification body.
Quality marks are meant to communicate the added value of the product. The extra quality
may refer to one or more aspects that a consumer may be unsure about. Such aspects can be
environmental impact, product quality, safety and hygiene, production standards, the absence
of additives or preservatives, etc. Overall, a quality mark gives the consumer a visual and
easily identifiable quality assessment tool, originating from a reliable source. Here, Ashok
Kanchan, Food Desk, Consumer Voice, acquaints us with the quality marks used in India.
ISI MARK

ISI is a certification mark scheme operated by the Bureau of Indian Standards (BIS), earlier
known as the Indian Standards Institute (ISI), under the provisions of the BIS Act, 1986. Any
product that carries the ISI mark is certified to conform to the relevant Indian Standard and
to be safe for use by consumers.

ISI certification is mandatory for some products and voluntary for others. Products under
mandatory ISI certification include cement, electrical appliances, LPG cylinders, batteries,
oil pressure stoves, automobile accessories, medical equipment, steel products, stainless
steel, chemicals, fertilizers, infant foods and packaged drinking water. The complete list of
mandatory products is published by BIS.
AGMARK

Agmark certification of agricultural commodities is carried out for the benefit of consumers
and producers/manufacturers by the Directorate of Marketing and Inspection, an agency of the
Government of India. Some 205 commodities, including pulses, cereals, essential oils,
vegetable oils, fruits and vegetables, and semi-processed products, must carry the
AGMARK. The scheme is legally enforced by the Agricultural Produce (Grading &
Marking) Act, 1937.

Manufacturers seeking to grade their commodities under Agmark have to obtain a Certificate
of Authorization from an Agmark laboratory. For this purpose, they should have adequate
infrastructure to process the commodity and access to an approved laboratory for
determination of quality and safety factors. The quality of a product is determined with
reference to factors such as size, variety, weight, colour, moisture and fat content. The grades
incorporated are grades 1, 2, 3 and 4, or special, good, fair and ordinary.
VEGETARIAN AND NON-VEGETARIAN MARKS

As per Food Safety & Standards (Packaging & Labelling) Regulations, 2011:

(i) Every package of 'non-vegetarian' food shall bear a declaration to this effect, made by a
symbol and colour code as stipulated, to indicate that the product is non-vegetarian food.
The symbol shall consist of a brown colour-filled circle inside a square with a brown
outline, having sides double the diameter of the circle.

(ii) Where any article of food contains egg only as non-vegetarian ingredient, the
manufacturer or packer or seller may give declaration to this effect in addition to the said
symbol.
(iii) Every package of vegetarian food shall bear a declaration to this effect by a symbol and
colour code as stipulated for this purpose, to indicate that the product is vegetarian food.
The symbol shall consist of a green colour-filled circle, having a diameter not less than the
minimum size specified, inside a square with a green outline having sides double the
diameter of the circle.

The symbol shall be prominently displayed

 on the package having contrast background on principal display panel


 in close proximity to the name or brand name of the product
 on the labels, containers, pamphlets, leaflets, advertisements in any media

Provided also that the provisions of above regulation shall not apply in respect of mineral
water or packaged drinking water or carbonated water or alcoholic drinks, or liquid milk and
milk powders.

HALLMARK

The hallmarking scheme was launched by the Bureau of Indian Standards (BIS) at the behest
of the Government of India, for gold in the year 2000 and for silver jewellery in 2005. The
scheme is voluntary in nature.

Consumers need to look out for the following markings on gold/silver jewelry:

1. BIS Standard Mark


2. Purity in Carat/fineness mark. With reference to gold, the marks are:

 916 corresponds to 22 carat


 750 corresponds to 18 carat
 585 corresponds to 14 carat

3. Assaying and Hallmarking Centre identification mark/Number: The logo of a BIS-recognized Assaying
and Hallmarking Centre where the jewellery has been assayed and hallmarked
4. Jeweller’s identification mark: The logo of a BIS-certified jeweller/jewellery manufacturer
Consumers need to look out for the following markings on gold/silver Bullion:

 BIS Standard Mark


 Fineness
 Weight of the bullion bar or coin in kg or g
 Name of Manufacturer
 Serial no.
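The fineness figures follow from the carat scale: fineness is approximately carat / 24 x 1000, truncated to a whole number (14 carat computes to 583 but is conventionally hallmarked as 585). A quick check:

```python
def carat_to_fineness(carat: int) -> int:
    """Convert carat purity to parts-per-thousand fineness (truncated)."""
    return int(carat / 24 * 1000)

print(carat_to_fineness(22))  # 916
print(carat_to_fineness(18))  # 750
# 14 carat computes to 583, but BIS hallmarks use 585 by convention.
```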

BEE’ STAR LABEL MARK

To provide consumers with a reference for the energy-saving, and thereby cost-saving,
aspects of household electrical and other equipment, the Bureau of Energy Efficiency (BEE)
of the Ministry of Power, Government of India, launched the BEE star label scheme in 2006.
The scheme was invoked for frost-free (no frost) refrigerators, tubular fluorescent lamps,
room air conditioners, direct cool refrigerators, distribution transformers, induction motors,
pump sets, ceiling fans, LPG stoves, electric geysers and colour TVs.

The BEE star label applies to the following electrical products. For products where labelling
is mandatory, a minimum 1 Star rating is required before sale.

Mandatory Appliances:

1. Room Air Conditioners
2. Frost Free Refrigerators
3. Tubular Fluorescent Lamps
4. Distribution Transformers
5. Room Air Conditioners (Cassettes, Floor Standing)
6. Direct Cool Refrigerators
7. Colour TVs
8. Office Equipment (Printers, Copiers, Scanners, MFDs)
9. Variable Capacity Inverter Air Conditioners
10. LED Lamps

Voluntary Appliances:

1. Induction Motors
2. Pump Sets
3. Ceiling Fans
4. LPG Stoves
5. Washing Machines
6. Computers (Notebooks/Laptops)
7. Ballasts (Electronic/Magnetic)
8. Electric Geysers
9. Diesel Engine Driven Mono-set Pumps
10. Solid State Inverters
11. DG Sets

Topic 13 Benchmarking

Benchmarking is a way to go backstage and watch another company’s performance from the wings, where all stage tricks and hurried
realignments are visible.

In his 1964 book Managerial Breakthrough, Joseph Juran asked the question:

What is it that organizations do that gets results so much better than ours?

The answer to this question opens the door to benchmarking, an approach that is accelerating among many firms that have adopted the
total quality management (TQM) philosophy.

The Essence of Benchmarking

The essence of benchmarking is the continuous process of comparing a company's strategy, products, and processes with those of
world leaders and best-in-class organizations.

The purpose is to learn how they achieved excellence, and then to set out to match and even surpass it. The justification lies partly in the
question: "Why reinvent the wheel if I can learn from someone who has already done it?" However, benchmarking is not a panacea that
can replace all other quality efforts or management processes.

Levels of Benchmarking

1. Internal benchmarking (within the company)


2. Competitive or strategic benchmarking (Industry and competitors)
3. Benchmarking outside the industry.
What benefits have been achieved by the organizations that have successfully completed their benchmarking programs?

There are three sets of benefits:

1. Cultural Change
2. Performance Improvement
3. Human Resources

(A) Cultural Change: Benchmarking allows organizations to set realistic, rigorous new performance targets, and this process helps
convince people of the credibility of these targets. It helps people understand that there are other organizations that know and do the job
better than their own organization.

(B) Performance Improvement: Benchmarking allows the organization to define specific gaps in performance and to select the
processes to improve. These gaps provide objectives and action plans for improvement at all levels of the organization and promote
improved performance for individual and group participants.

(C) Human Resources: Benchmarking provides a basis for training. Employees begin to see the gap between what they are doing and
what the best-in-class are doing. Closing the gap points out the need for personnel to be trained in techniques of problem solving and
process improvement.

What theoretical model would you suggest to implement a benchmarking program?

Organizations that benchmark adapt the process to best fit their own needs and culture. Although the number of steps in the process may
vary from organization to organization, the following six steps contain the core techniques:

1. Decide what to benchmark.
2. Understand the current performance of your organization.
3. Plan the what, how and when of the benchmarking endeavour.
4. Study others well (the practices or systems you wish to benchmark).
5. Gather data and learn from it.
6. Use the findings.
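The gap analysis implied by steps 2 to 5 can be sketched in a few lines of code. The metrics and figures below are invented for illustration; a real benchmarking study would use the organization’s own measures and data gathered from the best-in-class partner.

```python
# Hypothetical sketch of a benchmarking gap analysis.
# All metric names and values are illustrative, not from the text.

own_performance = {
    "defect_rate_pct": 4.2,
    "on_time_delivery_pct": 87.0,
    "cycle_time_days": 12.0,
}

best_in_class = {
    "defect_rate_pct": 0.8,
    "on_time_delivery_pct": 98.5,
    "cycle_time_days": 5.0,
}

def performance_gaps(own, benchmark):
    """For each metric, report how far current performance is from the
    benchmark; non-zero gaps become candidates for improvement targets."""
    return {m: round(own[m] - benchmark[m], 2) for m in own}

gaps = performance_gaps(own_performance, best_in_class)
for metric, gap in gaps.items():
    print(f"{metric}: gap = {gap}")
```

The point of the sketch is simply that benchmarking turns vague aspirations into specific, quantified gaps, which is what makes the resulting targets credible to employees.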

Topic 8 Quality Audit



Quality audit is the process of systematic examination of a quality system carried out by an
internal or external quality auditor or an audit team. It is an important part of an
organization’s quality management system and is a key element in the ISO quality system
standard, ISO 9001.

Quality audits are typically performed at predefined time intervals and ensure that the
institution has clearly defined internal system monitoring procedures linked to effective
action. This can help determine if the organization complies with the defined quality system
processes and can involve procedural or results-based assessment criteria.

With the upgrade of the ISO 9000 series of standards from the 1994 to the 2008 series, the focus
of the audits has shifted from purely procedural adherence towards measurement of the actual
effectiveness of the Quality Management System (QMS) and the results that have been
achieved through the implementation of a QMS.

Audits are an essential management tool for verifying objective evidence of processes, assessing how successfully processes have been implemented, judging the effectiveness of achieving defined target levels, and providing evidence concerning the reduction and elimination of problem areas. For the benefit of the organization, quality auditing should not only report non-conformances and corrective actions but also highlight areas of good practice. In this way, other departments may share information and amend their working practices as a result, also contributing to continual improvement.
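A minimal sketch of how audit findings might be recorded follows, covering both non-conformances and good practice as recommended above. The clause numbers, finding types and notes are invented examples, not prescribed by any standard.

```python
# Illustrative sketch only: a minimal structure for recording and
# summarizing audit findings. All entries below are invented examples.

from collections import Counter

findings = [
    {"clause": "8.5.1", "type": "non-conformance",
     "note": "work instruction out of date"},
    {"clause": "7.1.5", "type": "observation",
     "note": "calibration records incomplete"},
    {"clause": "9.2", "type": "good-practice",
     "note": "internal audit schedule well maintained"},
]

# Summarize findings by type so the audit report can show the balance
# between problems raised and good practice highlighted.
summary = Counter(f["type"] for f in findings)
for finding_type, count in summary.items():
    print(f"{finding_type}: {count}")
```

Tracking good practice alongside non-conformances is what lets other departments learn from the audit rather than treating it purely as a policing exercise.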

Quality audits can be an integral part of compliance or regulatory requirements. One example
is the US Food and Drug Administration, which requires quality auditing to be performed as
part of its Quality System Regulation (QSR) for medical devices (Title 21 of the US Code of
Federal Regulations part 820).

Several countries (New Zealand, Australia, Sweden, Finland, Norway and the USA) have adopted quality audits in their higher education systems. Initiated in the UK, the process of quality audit in the education system focused primarily on procedural issues rather than on the results or the efficiency of a quality system implementation.

Audits can also be used for safety purposes. Evans & Parker (2008) describe auditing as one of the most powerful safety monitoring techniques and ‘an effective way to avoid complacency and highlight slowly deteriorating conditions’, especially when the auditing focuses not just on compliance but also on effectiveness.

The processes and tasks that a quality audit involves can be managed using a wide variety of software and self-assessment tools. Some of these relate specifically to quality in terms of fitness for purpose and conformance to standards, while others relate to quality costs or, more accurately, to the cost of poor quality. In analyzing quality costs, a cost-of-quality audit can be applied across any organization rather than just to conventional production or assembly processes.
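A cost-of-quality analysis is often broken down using the common prevention–appraisal–failure (PAF) grouping. The sketch below shows that breakdown; all figures are invented for illustration, since a real cost-of-quality audit would draw them from the organization’s accounts.

```python
# Illustrative cost-of-quality breakdown using the common
# prevention-appraisal-failure (PAF) grouping. All figures invented.

quality_costs = {
    "prevention":       {"training": 20_000, "process_planning": 15_000},
    "appraisal":        {"inspection": 30_000, "testing": 25_000},
    "internal_failure": {"scrap": 40_000, "rework": 35_000},
    "external_failure": {"warranty_claims": 50_000, "returns": 10_000},
}

def cost_by_category(costs):
    """Total each category of quality cost."""
    return {category: sum(items.values()) for category, items in costs.items()}

totals = cost_by_category(quality_costs)

# The "cost of poor quality" is conventionally the failure portion.
cost_of_poor_quality = totals["internal_failure"] + totals["external_failure"]
print(totals)
print("cost of poor quality:", cost_of_poor_quality)
```

Separating failure costs from prevention and appraisal costs makes the case for investment visible: money spent on prevention is justified by the reduction it brings in the (usually much larger) cost of poor quality.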

Topic 9 Quality Awards



The Rajiv Gandhi National Quality Award is the national quality award given by the
Bureau of Indian Standards to Indian organisations that show excellence in their
performance. It is named after Rajiv Gandhi, the former Prime Minister of India, and was
introduced in 1991 after his death. The award aims to promote quality services to the
consumers and to give special recognition to organisations that contribute significantly
towards the quality movement of India.
The award is presented annually as per the financial year, and is similar to other national
quality awards worldwide like the Malcolm Baldrige National Quality Award of the United
States, European Quality Award of the European Union and the Deming Prize of Japan.

The award is presented to organisations in five broad categories: large scale manufacturing,
small scale manufacturing, large scale service sector, small scale service sector and best
overall. Furthermore, there are 14 commendation certificates for organisations showing
excellence in various fields, including but not limited to biotechnology, chemicals,
electronics, food and drugs, metallurgy, textiles, jewellery, education, finance, healthcare and
information technology.

Apart from the certificates and awards, the winner of the Best of All category gets a monetary prize of ₹500,000 (US$7,200), while the other four awards carry a cash prize of ₹200,000 (US$2,900). The commendation certificate carries a financial incentive of ₹100,000 (US$1,400).
