Critical to Schedule (CTS) metrics and Critical to Cost (CTC) metrics are two important
types of project metrics that are used to track and measure the performance of a project.
Both types of metrics are used to identify and assess risks and opportunities that could
impact the project's schedule or cost.
Critical to Schedule (CTS) metrics are used to measure the project's progress against its
planned schedule. These metrics can include:
• Schedule variance: The difference between the planned completion date of an activity and
the actual completion date.
• Schedule performance index (SPI): A measure of how well the project is progressing against
schedule. It is calculated by dividing the earned value of work by the planned value of work; an SPI below 1 means the project is behind schedule.
• Schedule slippage: The amount of time that a project has slipped behind schedule.
Critical to Cost (CTC) metrics are used to measure the project's cost performance against its
planned budget. These metrics can include:
• Cost variance: The difference between the earned value of the work performed and its actual cost; a negative variance means the work cost more than planned.
• Cost performance index (CPI): A measure of how well the project is managing its costs. It is
calculated by dividing the earned value of work by the actual cost of work.
• Cost overrun: The amount by which a project's actual cost exceeds its budget.
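As an illustration, the schedule and cost indices above can be computed directly from earned-value figures (the numbers here are invented for the example):

```python
# Earned-value calculations behind the CTS/CTC metrics.
# PV = planned value, EV = earned value, AC = actual cost (illustrative figures).
pv = 100_000  # budgeted cost of work scheduled to date
ev = 90_000   # budgeted cost of work actually performed
ac = 95_000   # actual cost of the work performed

sv = ev - pv    # schedule variance (negative = behind schedule)
spi = ev / pv   # schedule performance index (<1 = behind schedule)
cv = ev - ac    # cost variance (negative = over budget)
cpi = ev / ac   # cost performance index (<1 = over budget)

print(f"SV={sv}, SPI={spi:.2f}, CV={cv}, CPI={cpi:.3f}")
```

Here SPI = 0.90 and CPI < 1, so the project is both behind schedule and over budget, which is exactly the interrelationship the metrics are meant to expose.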
Key differences between CTS and CTC metrics
The key differences between CTS and CTC metrics are:
• Focus: CTS metrics focus on the project's schedule, while CTC metrics focus on the project's
cost.
• Measurement: CTS metrics are typically measured in terms of time, while CTC metrics are
typically measured in terms of money.
• Impact: Delays in the project schedule can impact the project's cost, and cost overruns can
impact the project's schedule.
Relationship between CTS and CTC metrics
CTS and CTC metrics are often interrelated. For
example, a delay in one activity can cause delays in other activities, which can lead to a cost
overrun. Conversely, a cost overrun can sometimes be avoided by delaying an activity.
By tracking and measuring CTS and CTC metrics, project managers can identify and assess
risks and opportunities that could impact the project's schedule or cost. This information can
then be used to make informed decisions about how to manage the project.
Here is a table summarizing the key differences between CTS and CTC metrics:

Aspect         CTS metrics          CTC metrics
Focus          Project schedule     Project cost
Measured in    Time                 Money
Relationship   Interrelated         Interrelated
A process baseline estimate is the approved reference estimate against which project
performance is measured. It should be comprehensive and include all of the key elements of
the project, such as:
• Scope of work: This defines the specific deliverables or outcomes of the project.
• Tasks: These are the individual steps or activities that need to be completed to achieve the
project's objectives.
• Resources: These are the people, equipment, and materials that will be required to
complete the project.
• Schedule: This outlines the timeframe for completing each task and milestone.
• Cost: This is an estimate of the total cost of the project, including labor, materials, and
overhead.
Once the process baseline estimate has been developed, it should be reviewed and
approved by project stakeholders. It should also be updated regularly as the project
progresses to reflect changes in the project scope, schedule, or resources.
Benefits of a process baseline estimate
• Improved project planning: A process baseline estimate can help project managers to identify
potential risks and issues early on and develop mitigation strategies.
• Better resource allocation: By understanding the resource requirements of the project, project
managers can allocate resources more effectively.
• Enhanced project tracking and control: A process baseline estimate can be used to track
project progress and identify deviations from the plan.
• More accurate cost management: A process baseline estimate can help project managers to
avoid cost overruns.
Developing a process baseline estimate
There are several different methods for
developing a process baseline estimate, but the most common methods include:
• Expert judgment: This involves relying on the experience and knowledge of experts to
estimate the time, resources, and costs of the project.
• Parametric estimation: This involves using historical data from similar projects to estimate
the time, resources, and costs of the project.
• Bottom-up estimation: This involves breaking down the project into smaller tasks and
estimating the time, resources, and costs of each task.
• Top-down estimation: This involves making a high-level estimate of the time, resources,
and costs of the project and then allocating these resources to individual tasks.
The choice of estimation method will depend on the specific project and the availability of
data.
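Bottom-up estimation, for instance, can be sketched as a simple roll-up of per-task estimates into a project-level baseline (the task names and figures below are hypothetical):

```python
# Bottom-up estimation: break the project into tasks, estimate each one,
# and roll the estimates up into a baseline total.
tasks = [
    # (task, hours, hourly_rate, material_cost) -- illustrative values
    ("requirements", 40, 80.0, 0.0),
    ("design",       60, 90.0, 0.0),
    ("build",       200, 75.0, 5_000.0),
    ("test",         80, 70.0, 500.0),
]

total_hours = sum(h for _, h, _, _ in tasks)
total_cost = sum(h * rate + mat for _, h, rate, mat in tasks)

print(f"baseline: {total_hours} hours, ${total_cost:,.2f}")
```

A real baseline would also carry schedule dates and resource assignments per task, but the roll-up logic is the same.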
Conclusion
A process baseline estimate is a valuable tool for project managers. It can help to improve
project planning, resource allocation, tracking, and control, and cost management. By
developing a process baseline estimate, project managers can increase their chances of
project success.
In statistics and process improvement, special causes and common causes of variation are
two categories of factors that contribute to the variability of a process or system.
Special causes are assignable factors that are not inherent to the process and that can be
identified and eliminated. These causes are typically unexpected and sporadic, and they can
cause significant shifts in the process output. Examples of special causes include:
• A machine breakdown
• A human error
• A change in the raw materials
• A supplier defect
• An environmental change
Common causes are inherent factors that are part of the process and that cannot be
eliminated without making fundamental changes to the process. These causes are typically
predictable and consistent, and they contribute to the natural variability of the process output.
The distinction between special and common causes is important for process improvement.
By identifying and eliminating special causes of variation, you can improve the predictability
and consistency of the process. By addressing common causes of variation, you can reduce
the overall variability of the process and improve its overall performance.
Here is an example to illustrate the difference between special and common causes of
variation:
Consider a manufacturing process that produces ball bearings. The diameter of the ball
bearings is a critical quality characteristic, and the process is designed to produce ball
bearings with a diameter of 1 inch.
Over time, it is observed that the diameter of the ball bearings varies from 0.95 inches to 1.05
inches. This variability is a natural part of the process, and it is caused by common causes
such as variation in the skill level of operators, variation in the quality of raw materials, and
variation in the measurement system.
However, one day, the diameter of the ball bearings starts to vary significantly, with some ball
bearings being as small as 0.8 inches and others being as large as 1.1 inches. This is a
special cause of variation, and it is likely due to a factor such as a machine breakdown or a
change in the raw materials.
By investigating this special cause of variation, the engineers are able to identify and
eliminate the problem. As a result, the diameter of the ball bearings returns to its normal
range of variation.
This example illustrates that special causes of variation can have a significant impact on the
quality of a product or service. By identifying and eliminating special causes of variation, you
can improve the overall performance of a process.
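The distinction can be operationalized with control limits: points beyond three standard deviations of the stable baseline are flagged as likely special causes, while points inside the limits are treated as common-cause variation (the diameters below are invented for illustration):

```python
import statistics

# Flag likely special causes: diameters beyond 3 sigma of the stable baseline.
baseline = [1.00, 0.98, 1.02, 0.99, 1.01, 1.00, 0.97, 1.03, 1.00, 1.00]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

new_points = [1.01, 0.99, 1.10, 0.80]  # the last two suggest a special cause
for x in new_points:
    status = "special cause?" if not (lcl <= x <= ucl) else "common-cause variation"
    print(f"{x:.2f}: {status}")
```

Note that the control limits come from the process's own historical variation, not from the specification limits: a point can be inside specification yet still signal a special cause.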
The purpose of measurement system evaluation (MSE) is to ensure that the measurement
system is capable of providing data that is accurate, precise, and consistent, so that process
decisions rest on trustworthy measurements. Several properties of a measurement system are assessed.
Bias
Bias is the difference between the average of repeated measurements and the true value of
the characteristic being measured.
Example of bias:
• A scale that consistently reads 0.5 kg above the true weight of the objects placed on it.
Repeatability
Repeatability is the variation observed when the same operator measures the same item
multiple times with the same measuring device under the same conditions.
Example of repeatability:
• A measuring device that is capable of repeatedly measuring the thickness of a piece of paper
to within ±0.01 mm.
• An operator who is capable of repeatedly measuring the length of a piece of wood to within
±0.1 cm.
• A measurement procedure that is well-documented and standardized, so that all operators
can take measurements in the same way.
Reproducibility
Reproducibility is the variation observed when the same item is measured by different
operators, with different devices, or under different conditions.
Example of reproducibility:
• A measuring device that is capable of producing the same measurements of the thickness of
a piece of paper when used by different operators or in different environments.
• An operator who is capable of producing the same measurements of the length of a piece of
wood when using different measuring devices or in different environments.
• A measurement procedure that is robust enough to withstand variations in operator skill level
or environmental conditions.
Stability
Stability is the ability of a measurement system to produce consistent measurements of the
same item over time.
Example of stability:
• A measuring device that is capable of producing the same measurements of the weight of an
object over time, even if the device is used frequently or exposed to different environmental
conditions.
• An operator who is capable of producing the same measurements of the volume of a liquid
over time, even if the operator is fatigued or under stress.
• A measurement procedure that is resistant to changes in temperature, humidity, or other
environmental factors.
Linearity
Linearity is a measure of the relationship between the input and output of a measurement
system. It is the degree to which the measurement system produces a linear relationship
between the true value of the characteristic being measured and the measured value. A
linear measurement system will produce a straight line when the true value of the
characteristic is plotted against the measured value.
Example of linearity:
• A measuring device that produces a linear relationship between the length of a piece of metal
and the measured length.
• An operator who consistently produces a linear relationship between the weight of an object
and the measured weight.
• A measurement procedure that is designed to produce a linear relationship between the true
value of the characteristic and the measured value.
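Linearity can be checked by regressing measured values against known reference values: a slope near 1 and an intercept near 0 indicate a linear, unbiased system. A minimal sketch with fabricated readings:

```python
# Check measurement-system linearity: fit measured = slope * reference + intercept
# by least squares. Reference standards and readings below are illustrative.
reference = [1.0, 2.0, 3.0, 4.0, 5.0]       # known true values
measured  = [1.02, 2.01, 3.05, 3.99, 5.03]  # device readings

n = len(reference)
mx = sum(reference) / n
my = sum(measured) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(reference, measured))
sxx = sum((x - mx) ** 2 for x in reference)
slope = sxy / sxx            # near 1 suggests good linearity
intercept = my - slope * mx  # near 0 suggests little constant bias

print(f"slope={slope:.3f}, intercept={intercept:.3f}")
```

A slope far from 1 would mean the bias changes with the size of the measurement, which is exactly the non-linearity this check is designed to catch.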
Value stream analysis and value stream mapping are two closely related concepts that
are often used interchangeably. However, there is a subtle difference between the two.
Value stream analysis is a broader term that refers to the process of identifying and
analyzing the steps involved in producing a product or service. This includes identifying the
value-added and non-value-added steps, as well as the sources of waste and variation.
Value stream mapping is a specific tool that is used to visualize the value stream. It is a
graphical representation of the steps involved in producing a product or service, including the
time, resources, and information flows. In other words, value stream analysis is the process of
understanding the value stream, while value stream mapping is a tool for visualizing the value stream.
Here is a table summarizing the key differences between value stream analysis and value
stream mapping:

Aspect    Value stream analysis                Value stream mapping
Nature    Analytical process                   Visualization tool
Purpose   Understand the value stream and      Visualize the steps, times, resources,
          identify waste and variation         and information flows
Here are some examples of how value stream analysis and value stream mapping can be
used:
• Identifying and eliminating waste: By analyzing the value stream, you can identify the steps
that are adding no value to the customer and are therefore wasteful. You can then take steps
to eliminate these steps or reduce their impact.
• Reducing lead times: By visualizing the value stream, you can identify the bottlenecks that
are slowing down the production process. You can then take steps to eliminate these
bottlenecks or reduce their impact.
• Improving quality: By analyzing the value stream, you can identify the sources of variation in
the production process. You can then take steps to reduce this variation and improve the
quality of the product or service.
Value stream analysis and value stream mapping are powerful tools that can be used to
improve the efficiency and effectiveness of any process. By understanding and visualizing
the value stream, you can identify and eliminate waste, reduce lead times, and improve
quality.
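A first-pass value stream analysis often boils down to a value-added ratio: the fraction of total lead time that actually adds value for the customer. A small sketch with hypothetical step names and times:

```python
# Value stream analysis: compare value-added time with total lead time.
steps = [
    # (step, minutes, value_added) -- illustrative figures
    ("machining",        30, True),
    ("waiting in queue", 240, False),
    ("assembly",         20, True),
    ("inspection",       10, False),
    ("transport",        15, False),
]

total = sum(t for _, t, _ in steps)
value_added = sum(t for _, t, va in steps if va)
print(f"lead time: {total} min, value-added: {value_added} min "
      f"({100 * value_added / total:.1f}%)")
```

Ratios like this one (here roughly 16%) are typical of unimproved processes, and they point straight at the waiting and transport steps as the first waste to attack.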
The goal of statistical inference is to make inferences about a population based on a sample
of that population. This is because it is often impractical or impossible to collect data from the
entire population. For example, if you want to know the average height of all adults in the
United States, it would be very difficult to collect data from every adult in the country. Instead,
you would likely collect data from a sample of adults, and then use statistical inference to
draw conclusions about the average height of all adults.
There are two main types of statistical inference:
• Estimation: This is the process of estimating the value of a population parameter based on a
sample of that population. For example, you might estimate the average height of all adults in
the United States by collecting data from a sample of 1,000 adults and then calculating the
average height of that sample.
• Hypothesis testing: This is the process of testing whether or not a particular hypothesis
about a population parameter is true. For example, you might test the hypothesis that the
average height of all adults in the United States is 6 feet tall.
Statistical inference is based on the assumption that the sample is representative of the
population. This means that the sample should be drawn from the population in a way that
ensures that every member of the population has an equal chance of being selected. If the
sample is not representative, then the inferences that are drawn from the sample may be
biased and inaccurate.
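For example, a 95% confidence interval for a population mean can be estimated from a sample. The sketch below uses a z approximation, which is reasonable for large samples (the heights are invented, in inches):

```python
import math
import statistics

# Estimate a population mean with a 95% confidence interval
# (z approximation; heights are illustrative, in inches).
sample = [68.1, 70.3, 65.9, 71.2, 69.4, 66.8, 70.0, 68.5, 67.7, 69.9]
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error
margin = 1.96 * se  # 1.96 = z value for 95% confidence

print(f"mean = {mean:.2f}, 95% CI = ({mean - margin:.2f}, {mean + margin:.2f})")
```

For a small sample like this one, a t value (about 2.26 for 9 degrees of freedom) would give a slightly wider, more honest interval; the structure of the calculation is the same.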
Spaghetti charts and cause and effect diagrams are two common visual analysis tools.
Here is how they differ, with examples:
Spaghetti Charts
A spaghetti chart is a diagram that is used to visualize the flow of materials or information
through a process. It is a type of process map that uses lines to represent the movement of
materials or information. Spaghetti charts are typically used to identify bottlenecks and areas
of waste in a process.
A spaghetti chart of a manufacturing process might show the flow of raw materials from the
warehouse to the production floor, the movement of work-in-progress between different
workstations, and the flow of finished products to the shipping area.
Cause and Effect Diagrams
A cause and effect diagram, also known as a fishbone diagram or Ishikawa diagram, is a
diagram that is used to identify the root causes of a problem. It is a type of brainstorming tool
that uses a branching structure to represent the different causes of a problem.
A cause and effect diagram of a manufacturing defect might show the different factors that
could contribute to the defect, such as machine malfunction, operator error, or material
defects.
Key differences between spaghetti charts and cause and effect diagrams:
• Purpose: Spaghetti charts are used to visualize the flow of materials or information through a
process, while cause and effect diagrams are used to identify the root causes of a problem.
• Structure: Spaghetti charts use lines to represent the movement of materials or information,
while cause and effect diagrams use a branching structure to represent the different causes
of a problem.
• Application: Spaghetti charts are typically used in process improvement projects, while cause
and effect diagrams are used in problem-solving projects.
Here is a table summarizing the key differences between spaghetti charts and cause and
effect diagrams:

Aspect        Spaghetti chart                           Cause and effect diagram
Purpose       Visualize flow of materials/information   Identify root causes of a problem
Structure     Lines tracing movement                    Branching (fishbone) structure
Application   Process improvement projects              Problem-solving projects
DOE (Design of Experiments) and ANOVA (Analysis of Variance) are two statistical
techniques that are often used together to improve process quality and performance. Although
they are closely related, they serve distinct purposes.
What is DOE?
Design of Experiments (DOE) is a systematic approach to planning, conducting, and
analyzing experiments to identify and understand the effects of different factors on a process
or outcome. It involves selecting the appropriate experimental design, controlling variables,
collecting data, and interpreting the results to draw conclusions about the process.
What is ANOVA?
Analysis of Variance (ANOVA) is a statistical technique used to compare the means of two or
more groups to determine whether there is a significant difference between them. It uses the
concept of variance to partition the total variability in a dataset into components attributable
to different factors or groups.
DOE is typically used to:
1. Identify the factors that have a significant effect on a process or outcome.
2. Optimize a process by finding the combination of factors that produces the best outcome.
3. Develop a model to predict the outcome of a process given different values of the factors.
For example, suppose a bottle manufacturer uses DOE to plan an experiment that varies the
temperature and pressure of a molding process and measures the resulting bottle quality.
Next, they use ANOVA to analyze the data and determine whether there is a significant
difference in bottle quality between the different combinations of temperature and pressure. If
there is a significant difference, they can use ANOVA to identify which combinations of
factors produce the highest quality bottles.
In this example, DOE and ANOVA were used together to identify and optimize the
temperature and pressure settings for the molding process, resulting in improved bottle
quality.
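The ANOVA step can be computed by hand. The sketch below compares made-up bottle quality scores at three temperature settings, partitioning the total variability into between-group and within-group components:

```python
import statistics

# One-way ANOVA by hand: is there a significant difference between group means?
groups = {  # quality scores at three molding temperatures (illustrative)
    "180C": [8.2, 8.0, 8.4, 8.1],
    "200C": [8.9, 9.1, 8.8, 9.0],
    "220C": [8.3, 8.5, 8.2, 8.4],
}

all_values = [x for g in groups.values() for x in g]
grand_mean = statistics.mean(all_values)
k, n = len(groups), len(all_values)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((x - statistics.mean(g)) ** 2
                for g in groups.values() for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.2f}")  # compare against the F critical value for (2, 9) df
```

A large F statistic (here well above the 5% critical value of about 4.26 for 2 and 9 degrees of freedom) indicates that temperature has a significant effect on quality.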
Conclusion
DOE and ANOVA are valuable statistical techniques that can be used to improve process
quality and performance. DOE provides the framework for planning and conducting
experiments, while ANOVA provides the statistical tools for analyzing the data and drawing
conclusions. By using these techniques together, you can gain a deeper understanding of
your processes and make informed decisions to improve them.
Quality Function Deployment (QFD) is a structured method for translating customer needs
into product designs. The process typically involves the following steps:
1. Capturing Customer Needs: QFD begins by gathering the voice of the customer through
surveys, interviews, focus groups, and market feedback.
2. Translating Needs into Design Requirements: Once customer needs are understood, QFD
helps translate them into specific design requirements and technical specifications. This
involves creating a House of Quality (HoQ) matrix, which maps customer needs against
product characteristics and design elements.
3. Prioritizing Customer Needs: Not all customer needs have equal importance. QFD provides a
mechanism for prioritizing customer needs based on their impact on customer satisfaction
and the company's competitive advantage. This prioritization helps focus development efforts
on the most critical needs.
4. Design Evaluation and Iteration: Throughout the design process, QFD provides a framework
for evaluating proposed design solutions against customer needs. This iterative process
ensures that the final product meets or exceeds customer expectations.
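The House of Quality prioritization can be sketched numerically: each technical characteristic receives a priority equal to the sum of customer-importance ratings weighted by relationship strength. All names and ratings below are hypothetical, using the common 9/3/1 strength scale:

```python
# QFD House of Quality: technical priority = sum over customer needs of
# (importance of the need) x (strength of its relationship to the characteristic).
needs = {  # customer need -> importance rating (1-5), illustrative
    "long battery life": 5,
    "fast performance": 4,
    "light weight": 3,
}
relationships = {  # (need, technical characteristic) -> 9/3/1 strength
    ("long battery life", "battery capacity"): 9,
    ("long battery life", "CPU power draw"): 3,
    ("fast performance", "CPU power draw"): 9,
    ("light weight", "battery capacity"): 3,
}

characteristics = {tc for _, tc in relationships}
priority = {
    tc: sum(imp * relationships.get((need, tc), 0)
            for need, imp in needs.items())
    for tc in characteristics
}
for tc, score in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(tc, score)
```

The highest-priority characteristics are the ones development effort should focus on first, which is the prioritization step the text describes.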
Benefits of QFD
1. Improved Customer Satisfaction: By focusing on customer needs from the outset, QFD helps
create products that better meet or exceed customer expectations, leading to higher
customer satisfaction and loyalty.
2. Reduced Development Costs: By identifying and prioritizing customer needs early on, QFD
can help prevent costly rework and redesign later in the development process.
3. Enhanced Market Success: QFD can help companies develop products that are more
competitive and successful in the marketplace by aligning product features with customer
demand.
Examples of QFD in practice
1. Smartphone Design: Customer demand for a high-resolution camera, long battery life, and
powerful processor can be translated into specific design requirements for smartphone
components.
2. Automobile Design: Customer preferences for fuel efficiency, safety features, and comfort
can be incorporated into automobile design specifications.
Conclusion
Customer demand is the driving force behind QFD, ensuring that product development aligns
with customer expectations and preferences. By effectively capturing and integrating
customer demand into the QFD process, companies can create products that are more
successful in the marketplace and deliver enhanced customer satisfaction.
The Analytic Hierarchy Process (AHP) is a structured technique for organizing and
analyzing complex decision-making problems. It is particularly useful when there are
multiple criteria to consider and when these criteria are not easily quantifiable.
1. Define the decision problem, the evaluation criteria, and the candidate alternatives.
2. Construct a pairwise comparison matrix to compare the relative importance of each criterion.
3. Calculate the weights for each criterion based on the pairwise comparisons.
4. Compare the alternatives against each criterion to produce attribute scores.
5. Check the consistency of the pairwise judgments and revise them if needed.
6. Calculate the overall score for each alternative by multiplying its attribute scores by the
corresponding attribute weights.
For example, suppose a retail company is using AHP to choose a new store location, with
criteria such as accessibility, visibility, and cost. The company constructs a pairwise
comparison matrix to compare the relative importance of each criterion; it might decide, for
instance, that accessibility is twice as important as visibility.
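The criterion weights can be approximated by normalizing each column of the pairwise comparison matrix and averaging across rows, the standard AHP approximation. The matrix below encodes the judgment that accessibility is twice as important as visibility; the cost comparisons are assumed for the example:

```python
# AHP: derive criterion weights from a pairwise comparison matrix by
# normalizing each column and averaging across rows.
criteria = ["accessibility", "visibility", "cost"]
# matrix[i][j] = importance of criteria[i] relative to criteria[j]
matrix = [
    [1.0, 2.0, 4.0],   # accessibility vs. each criterion
    [0.5, 1.0, 2.0],   # visibility
    [0.25, 0.5, 1.0],  # cost (comparisons here are illustrative)
]

n = len(matrix)
col_sums = [sum(row[j] for row in matrix) for j in range(n)]
normalized = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
weights = [sum(normalized[i]) / n for i in range(n)]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

Because this example matrix is perfectly consistent, the weights come out exactly 4/7, 2/7, and 1/7; with real, slightly inconsistent judgments the column-average approximation still gives usable weights.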
The Taguchi Robustness Concept is built on three key ideas:
1. Quality loss function: Taguchi defines quality loss as the deviation from a desired target
value. The quality loss function is a mathematical formula that quantifies the cost of quality
loss.
2. Signal-to-noise ratio (SNR): Taguchi defines the SNR as the ratio of the desired signal to the
undesired noise. The SNR is a measure of the robustness of a product or process.
3. Control factors: Taguchi defines control factors as factors that can be controlled by the
designer. Control factors are used to reduce the effects of noise factors on the quality of a
product or process.
A manufacturer of ball bearings wants to design a process that produces ball bearings with a
diameter of 1 inch. However, the process is subject to variation caused by factors such as
variations in the skill level of operators, variations in the quality of raw materials, and
variations in the environment.
The manufacturer can use the Taguchi Robustness Concept to design a process that is less
sensitive to this variation. The manufacturer can do this by:
1. Identifying the sources of noise: The manufacturer can identify the sources of noise in the
process, such as variations in operator skill level, variations in raw material quality, and
variations in the environment.
2. Measuring quality loss: The manufacturer can measure the quality loss caused by each
source of noise. This information can be used to prioritize the sources of noise and to focus
on reducing the effects of the most significant sources of noise.
3. Selecting control factors: The manufacturer can select control factors that can be used to
reduce the effects of noise factors. For example, the manufacturer could select the operator
training program as a control factor, or the manufacturer could select the supplier of raw
materials as a control factor.
4. Optimizing control factors: The manufacturer can use experimental design to optimize the
control factors. This will help to identify the settings of the control factors that will produce the
most robust product or process.
By using the Taguchi Robustness Concept, the manufacturer can design a process that
produces ball bearings with a diameter that is closer to the desired target value of 1 inch. The
manufacturer can also reduce the amount of rework and scrap that is caused by variation in
the process.
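The quadratic quality loss function and the nominal-the-best signal-to-noise ratio can be written out directly. The cost constant k and the measured diameters below are invented for the example:

```python
import math
import statistics

# Taguchi quadratic loss: L(y) = k * (y - target)^2, with k an assumed cost
# constant; nominal-the-best SNR = 10 * log10(mean^2 / variance).
target = 1.0  # desired ball-bearing diameter, inches
k = 500.0     # dollars of loss per (inch of deviation)^2 -- illustrative

diameters = [1.00, 1.01, 0.99, 1.02, 0.98, 1.00, 1.01, 0.99]
avg_loss = statistics.mean(k * (d - target) ** 2 for d in diameters)

mean = statistics.mean(diameters)
var = statistics.variance(diameters)
snr = 10 * math.log10(mean ** 2 / var)  # higher = more robust process

print(f"average loss per part: ${avg_loss:.4f}, SNR: {snr:.1f} dB")
```

Settings of the control factors that raise the SNR reduce the process's sensitivity to noise, which is what "robustness" means in this framework.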
Benefits of the Taguchi Robustness Concept
• Reduced quality loss: The Taguchi Robustness Concept can help to reduce quality loss by
designing products and processes that are less sensitive to variation.
• Improved process control: The Taguchi Robustness Concept can help to improve process
control by identifying and reducing the effects of noise factors.
• Reduced costs: The Taguchi Robustness Concept can help to reduce costs by reducing the
amount of rework and scrap that is caused by variation.
• Improved customer satisfaction: The Taguchi Robustness Concept can help to improve
customer satisfaction by producing products and services that are of higher quality and that
are more reliable.
Conclusion
The Taguchi Robustness Concept is a powerful tool for designing products and processes
that are less sensitive to variation. It can help to reduce quality loss, improve process control,
reduce costs, and improve customer satisfaction.
A short-run control plan is a focused, time-limited plan for identifying and correcting a
specific performance problem. Here are some examples of how organizations use short-run
control plans:
• A manufacturing company is experiencing a defect rate that is above the acceptable level.
The company develops a short-run control plan to identify the root cause of the defect rate
and to implement corrective actions. The corrective actions may include training employees
on new procedures, inspecting materials more thoroughly, or investing in new equipment.
• A retail store is experiencing a decrease in sales. The store develops a short-run control plan
to identify the reasons for the decrease in sales and to implement strategies to improve
sales. The strategies may include running promotions, improving customer service, or
expanding the product line.
Short-run control plans can be a valuable tool for organizations that
need to take quick action to improve performance. When used effectively, they can help to
improve the quality of products and services, reduce costs, and improve customer
satisfaction.
Pre-control is a statistical process control (SPC) technique that is used to monitor and control
a process during the initial stages of production. It is a simple and effective way to identify
and correct process problems early on, before they have a chance to cause significant
defects or waste.
• Use of control zones: Pre-control divides the tolerance between the specification limits into
three zones: green, yellow, and red. The green zone represents the acceptable range of variation, the yellow
zone represents a warning that the process may be starting to drift out of control, and the red
zone represents an unacceptable level of variation that requires immediate corrective action.
Implementing pre-control involves the following steps:
1. Define the specification limits: The first step is to define the specification limits for the product
or service. This can be done based on customer requirements, industry standards, or other
relevant factors.
2. Collect data: Collect data on the product or service by measuring individual units. The data
should be collected in a consistent manner over time.
3. Plot the data on a control chart: Plot the data on a control chart that shows the specification
limits and the control zones.
4. Analyze the data: Analyze the data to identify any trends or patterns. Any measurements that
fall outside of the specification limits or in the yellow zone should be investigated further.
5. Take corrective action: If a measurement falls outside of the specification limits or in the red
zone, take immediate corrective action to bring the process back into control.
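The zone classification in the steps above can be sketched as a small function. A common convention, assumed here, puts the green zone in the middle half of the tolerance and the yellow zones in the outer quarters:

```python
def precontrol_zone(x, lsl, usl):
    """Classify a measurement into the green, yellow, or red pre-control zone.

    Convention assumed here: green = middle half of the tolerance,
    yellow = outer quarters, red = outside the specification limits.
    """
    quarter = (usl - lsl) / 4
    if x < lsl or x > usl:
        return "red"
    if lsl + quarter <= x <= usl - quarter:
        return "green"
    return "yellow"

# Ball-bearing example from the text: spec limits 1 inch +/- 0.01 inch.
for d in [1.000, 1.006, 0.992, 1.012]:
    print(f"{d:.3f} in -> {precontrol_zone(d, 0.99, 1.01)}")
```

Yellow results trigger investigation and red results trigger immediate corrective action, matching the rules described in the steps above.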
Here are some examples of pre-control in practice:
• A manufacturing company uses pre-control to monitor the diameter of ball bearings. The
specification limits are defined as 1 inch ± 0.01 inches. The company collects data on the
diameter of each ball bearing and plots it on a control chart. Any ball bearing that falls outside
of the specification limits or in the yellow zone is investigated further.
• A food processing company uses pre-control to monitor the weight of frozen vegetables. The
specification limits are defined as 10 ounces ± 0.5 ounces. The company collects data on the
weight of each package of frozen vegetables and plots it on a control chart. Any package that
falls outside of the specification limits or in the yellow zone is investigated further.
• A service company uses pre-control to monitor the response time of customer service
representatives. The specification limit is defined as 3 minutes. The company collects data
on the response time of each customer service call and plots it on a control chart. Any call
that has a response time of 3 minutes or longer is investigated further.
Pre-control is a valuable tool for organizations that want to improve the quality of their
products and services. It is a simple and effective way to identify and correct process
problems early on, before they have a chance to cause significant defects or waste.