
• Subject: Metrology and Measurement Methods (TY Mechanical)
• Teacher: Prof. P. P. Suryawanshi
• College of Engineering Pune
Structure for this Semester
• 1. For Laboratory courses, continuous weekly assessments through online
oral quiz will be taken after each experiment/assignment and the total
obtained marks will be pro-rated to 100 at the end of the semester. There will
not be any end-semester oral examination for the Lab courses.

• 2. For Theory courses: Test 1: 20 marks, Test 2: 20 marks, Online End Semester Examination (ESE): 60 marks pro-rated to 30 marks, and an Online Off-Campus Oral Examination (OOE) for 30 marks on the complete syllabus. The OOE will be conducted during the days allotted for practical exams/orals in the academic calendar. The faculty-in-charge, along with one additional internal faculty/external examiner (assigned by the HoD), will jointly conduct the OOE for 30 marks towards the ESE.
• Course Outcomes:

• Students will be able to


1. Apply knowledge of various tools and techniques used to determine the geometry and dimensions of components in engineering applications.

2. Design gauges to meet desired needs within realistic constraints.

3. Apply the basic concepts of mechanical measurement to industrial applications.

4. Describe methods of measurement for various quantities such as force, torque, power, displacement, velocity/speed, and acceleration.

Syllabus
• Unit I: Linear and Angular Measurements, Interferometry,
Measurement System Analysis [8 hrs]

• A] Introduction: Meaning of Metrology, Precision, Accuracy, Methods and Errors in Measurement, Calibration.
• B] Linear Measurement: Standards, Line Standards, End Standards, Wavelength Standard, Classification of Standards, Precision and Non-Precision Measuring Instruments and their characteristics, Slip Gauges.
• C] Interferometry: Introduction, Flatness testing by interferometry, NPL Flatness Interferometer, Study of Measuring Machines, Recent Trends in Engineering Metrology, use of interferometers for length, angle, and surface roughness measurement.
• D] Angle Measurement: Sine Bars, Sine Centres, Uses of Sine Bars, Angle Gauges, Autocollimator, Angle Dekkor.
• E] Measurement System Analysis: Introduction, Influence of temperature, operator skill, instrument errors, etc. on the MSA.

• Unit II: Design of Gauges, Interferometers and Comparators, Measuring Machines [6 hrs]

• A] Limits, Fits and Tolerances: Meaning of Limit, Fit, and Tolerance, Cost–Tolerance relationship, concept of Interchangeability, Indian Standard System.
• B] Design of Limit Gauges: Types, Uses, Taylor's Principle, Design of Limit Gauges, Three-Surface Generation.
• C] Inspection of Geometric Parameters: Straightness, Flatness, Parallelism, Concentricity, Squareness, and Circularity.
• D] Comparators: Uses, Types, Advantages and Disadvantages of various types of Comparators.
• E] Measuring Machines: Theory of Co-ordinate Metrology, Universal Measuring Machines, Co-ordinate Measuring Machines (CMM), different configurations of CMM, Principle, Errors involved, Calibration, Probing systems, automated inspection systems.
• Unit III: Surface Finish Measurement, Screw Thread Metrology, Gear
Metrology [7 hrs]

• A] Surface Finish Measurement: Surface Texture, Meaning of RMS and CLA values, Roughness Measuring Instruments, Tactile and Non-tactile measuring instruments, difference between waviness and roughness, Grades of Roughness, Specifications, Assessment of surface roughness as per IS, Relationship between surface roughness and Manufacturing Processes.
• B] Screw Thread Metrology: External Screw Thread Terminology, Floating Carriage Instruments, Pitch and Flank Measurement of External Screw Threads, Application of Toolmaker's Microscope, Use of Profile Projector.
• C] Gear Metrology: Spur Gear Parameters, Gear Tooth Thickness Measurement: Gear Tooth Vernier Caliper, Constant Chord Method, Span Micrometer.
• Unit IV: Introduction to Mechanical Measurements
[6 hrs.]

• Importance of Measurements, Classification of measuring instruments,


generalized measurement system, types of inputs for measurements. Concepts
such as Linearity, Sensitivity, Static error, Precision, Reproducibility,
Threshold, Resolution, Hysteresis, Drift, Span & Range etc. Errors in
Measurements, Classification of errors in measurements, First order
instruments and its response to step, ramp, sinusoidal and impulse inputs.
• Unit V: Measurement Methods And Devices
[6 hrs.]

• A] Displacement Measurement: Transducers for displacement measurement,


potentiometer, LVDT, Capacitance Types, Digital Transducers (optical encoder),
Nozzle Flapper Transducer.

• B] Velocity Measurement: Tachometers, Tacho generators, Digital tachometers


and Stroboscopic Methods.

• C] Acceleration Measurement: theory of accelerometer and vibrometers, practical


accelerometers, strain gauge based and piezoelectric accelerometers.

• D] Strain Measurement: Theory of Strain Gauges, gauge factor, temperature


Compensation, Bridge circuit, orientation of strain gauges for force and torque,
Strain gauge based load cells and torque sensors.
• Unit VI: Measurement – methods and devices
[6 hrs.]

• A] Pressure Measurement: Elastic pressure transducers viz. Bourdon tubes, diaphragms, bellows, and piezoelectric pressure sensors; High Pressure Measurement, Bridgman gauge.

• B] Temperature Measurement: Thermocouple, Resistance thermometers,


Thermistors, Pyrometers. Liquid in glass Thermometers, Bimetallic strip.

• C] Vacuum measurement: Vacuum gauges viz. McLeod gauge, Ionization


and Thermal Conductivity gauges.
• Text Books:
• R. K. Jain, A Textbook of Engineering Metrology, Khanna Publications Pvt. Ltd., 18th Edition, 2002.
• I. C. Gupta, A Textbook of Engineering Metrology, Dhanpat Rai Publications Pvt. Ltd., 6th Edition, 2004.
• Anand Bewoor, Vinay Kulkarni, Metrology and Measurement, Tata McGraw-Hill, 1st Edition, 2009.
• N. V. Raghavendra, L. Krishnamurthy, Engineering Metrology and Measurements, Oxford University Press, 1st Edition, 2013.
• R. K. Rajput, A Textbook of Measurement and Metrology, S.K. Kataria & Sons, 2013.
• R. K. Jain, Mechanical and Industrial Measurements, Khanna Publishers, 1995.
• A. K. Sawhney, Mechanical Measurement and Control, Dhanpat Rai & Co. (P) Limited, 2017.
• Reference Books
• G. M. S. De Silva, Basic Metrology for ISO 9000 Certification, Elsevier Publications, 3rd Edition, 2002.
• Ernest Doebelin and Dhanesh Manik, Measurement Systems, McGraw-Hill, 6th Edition, 2017.
What is metrology?
The science of measurement
The fabric of science and technology
Metrology establishes the international standards for measurement used by all countries in the world, in both science and industry.
Examples: distance, time, mass, temperature, voltage, values of physical and chemical constants.
Metrology
• “The science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology.” [13]
• It establishes a common understanding of units, crucial to human activity. [2]
• Trading manufactured goods, the ability to accurately diagnose illnesses, and ensuring consumer confidence during the purchase of goods and services all depend on confidence in the measurements made during these processes. [13]
• This confidence is achieved by metrology's three basic activities: the definition of internationally accepted units of measurement, the realisation of these units of measurement in practice, and the application of chains of traceability (linking measurements to reference standards). [7][2]
Metrology
• Dimensional metrology is that branch of Metrology
which deals with measurement of “dimensions“ of
a part or work piece (lengths, angles, etc.)
• Dimensional measurements at the required level of
accuracy are the essential link between the
designers’ intent and a delivered product.
Objectives of Metrology and
Measurement
1. To ascertain that the newly developed components are
comprehensively evaluated and designed within the
process, and that facilities possessing measuring
capabilities are available in the plant
2. To ensure uniformity of measurements
3. To carry out process capability studies to achieve better
component tolerances
4. To assess the adequacy of measuring instrument
capabilities to carry out their respective measurements
5. To ensure cost-effective inspection and optimal use of
available facilities
Objectives of Metrology and
Measurement
6. To adopt quality control techniques to minimize scrap rate
and rework
7. To design gauges and special fixtures required to carry out
inspection
8. To investigate and eliminate different sources of measuring
errors
9. To calibrate measuring instruments regularly in order to
maintain accuracy in measurement
10. To adopt quality control techniques to minimize scrap rate
and rework
Measurement System

1. Measurand, a physical quantity such as length, weight, and angle to be


measured
2. Comparator, to compare the measurand (physical quantity) with a known
standard (reference) for evaluation
3. Reference, the physical quantity or property to which quantitative
comparisons are to be made, which is internationally accepted
Accuracy and Precision
• Accuracy is the degree of agreement of the measured dimension with its true magnitude.
It can also be defined as the maximum amount by which the result differs from the true
value or as the nearness of the measured value to its true value, often expressed as a
percentage.
• True value may be defined as the mean of the infinite number of measured values when
the average deviation due to the various contributing factors tends to zero.
• In practice, realization of the true value is not possible due to uncertainties of the
measuring process and hence cannot be determined experimentally. Positive and
negative deviations from the true value are not equal and will not cancel each other. One
would never know whether the quantity being measured is the true value of the quantity
or not.
• Precision is the degree of repetitiveness of the measuring process. It is the degree of
agreement of the repeated measurements of a quantity made by using the same method,
under similar conditions. In other words, precision is the repeatability of the measuring
process
Accuracy and Precision
• The ability of the measuring instrument to repeat the same results during the
act of measurements for the same quantity is known as repeatability.
Repeatability is random in nature and, by itself, does not assure accuracy,
though it is a desirable characteristic.
• Precision refers to the consistent reproducibility of a measurement.
Reproducibility is normally specified in terms of a scale reading over a given
period of time. If an instrument is not precise, it would give different results
for the same dimension for repeated readings.
• Accuracy gives information regarding how far the measured value is with
respect to the true value, whereas precision indicates quality of measurement,
without giving any assurance that the measurement is correct. These concepts
are directly related to random and systematic measurement errors.
• The figure shows that precision is not a single measurement but is associated with a process or set of measurements.
• Normally, in any set of measurements performed by the same instrument on the same component, individual measurements are distributed around the mean value, and precision is the agreement of these values with each other. The difference between the true value and the mean value of the set of readings on the same component is termed as error.
• Error can also be defined as the difference between the indicated value and the true value of the quantity measured, E = Vt − Vm, where Vt is the true value and Vm is the measured value.
Accuracy and Precision

• The value of E is also known as the absolute error. For example, when the
weight being measured is of the order of 1 kg, an error of ±2 g can be
neglected, but the same error of ±2 g becomes very significant while
measuring a weight of 10g.
• Thus, it can be mentioned here that for the same value of error, its distribution
becomes significant when the quantity being measured is small. Hence, %
error is sometimes known as relative error.
• Relative error is expressed as the ratio of the error to the true value of the quantity to be measured. Accuracy of an instrument can also be expressed as % error. If an instrument measures Vm instead of Vt, then % error = (Vt − Vm)/Vt × 100.
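A minimal numeric sketch of these error definitions (the true and measured values below are illustrative, not taken from the slides):

```python
# Hedged sketch: absolute, relative, and percentage error for an assumed measurement.
Vt = 25.000   # true value of the dimension, mm (illustrative)
Vm = 24.985   # value indicated by the instrument, mm (illustrative)

absolute_error = Vt - Vm                 # E = Vt - Vm
relative_error = absolute_error / Vt     # error expressed relative to the true value
percent_error = relative_error * 100     # accuracy expressed as % error

print(f"Absolute error : {absolute_error:.3f} mm")
print(f"Relative error : {relative_error:.6f}")
print(f"Percent error  : {percent_error:.3f} %")
```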
• Accuracy of an instrument is always assessed in terms of error. The instrument
is more accurate if the magnitude of error is low. It is essential to evaluate the
magnitude of error by other means as the true value of the quantity being
measured is seldom known, because of the uncertainty associated with the
measuring process.
• Consequently, when precision is an important criterion, mating components
are manufactured in a single plant and measurements are obtained with the
same standards and internal measuring precision, to accomplish
interchangeability of manufacture. If mating components are manufactured at
different plants and assembled elsewhere, the accuracy of the measurement of
two plants with true standard value becomes significant.
• Two terms are associated with accuracy, especially when one strives for higher
accuracy in measuring equipment: sensitivity and consistency. The ratio of the
change of instrument indication to the change of quantity being measured is
termed as sensitivity. In other words, it is the ability of the measuring
equipment to detect small variations in the quantity being measured
• When efforts are made to incorporate higher accuracy in measuring
equipment, its sensitivity increases. The permitted degree of sensitivity
determines the accuracy of the instrument. An instrument cannot be more
accurate than the permitted degree of sensitivity. It is very pertinent to mention
here that unnecessary use of a more sensitive instrument for measurement than
required is a disadvantage. When successive readings of the measured quantity
obtained from the measuring instrument are same all the time, the equipment is
said to be consistent.
• A highly accurate instrument possesses both sensitivity and consistency. A
highly sensitive instrument need not be consistent, and the degree of
consistency determines the accuracy of the instrument. An instrument that is
both consistent and sensitive need not be accurate, because its scale may have
been calibrated with a wrong standard. Errors of measurement will be constant
in such instruments, which can be taken care of by calibration. It is also
important to note that as the magnification increases, the range of
measurement decreases and, at the same time, sensitivity increases
• Range is defined as the difference between the lower and higher values that an
instrument is able to measure. If an instrument has a scale reading of 0.01–100
mm, then the range of the instrument is 0.01–100 mm, that is, the difference
between the maximum and the minimum value.
Accuracy &Precision
• Accuracy is the agreement of the result of a
measurement with the true value of the measured
quantity
• Precision is the repeatability of a measuring process
• Example: If a carpenter has to cut a board to fit a shelf, it does not matter which scale he uses, as long as the same scale is used for both the shelf and the board; here the precision of the measurements is what matters. When the carpenter buys the board from the market, he should make sure that the scale used by him and by the supplier is the same, so both scales should be accurate and in accordance with the standard scales.
Accuracy &Precision
• Precision is concerned with a set of measurements

[Figure: frequency distributions of repeated measurements, comparing the average dimension with the true value — in one case the average differs from the true value, in the other the average coincides with the true value.]

The plot tells how well the various measurements performed by same
instrument on the same component agree with each other.
If the difference between the true value & measured value is less
(error), then the measurement is accurate.
Calibration of Measuring Instruments
• It is essential that the equipment/instrument used to measure a given physical
quantity is validated. The process of validation of the measurements to
ascertain whether the given physical quantity conforms to the original/national
standard of measurement is known as traceability of the standard.
• One of the principal objectives of metrology and measurements is to analyse
the uncertainty of individual measurements, the efforts made to validate each
measurement with a given equipment/instrument, and the data obtained from
it.
• Conditions that exist during calibration of the instrument should be similar to
the conditions under which actual measurements are made. The standard that
is used for calibration purpose should normally be one order of magnitude
more accurate than the instrument to be calibrated.
• Calibration is the procedure used to establish a relationship between the values
of the quantities indicated by the measuring instrument and the corresponding
values realized by standards under specified conditions.
• It refers to the process of establishing the characteristic relationship between
the values of the physical quantity applied to the instrument and the
corresponding positions of the index, or creating a chart of quantities being
measured versus readings of the instrument.
• If the instrument has an arbitrary scale, the indication has to be multiplied by a
factor to obtain the nominal value of the quantity measured, which is referred
to as scale factor.
• If the values of the variable involved remain constant (not time dependent)
while calibrating a given instrument, this type of calibration is known as static
calibration, whereas if the value is time dependent or time-based information
is required, it is called dynamic calibration.
• The relationship between an input of known dynamic behavior and the
measurement system output is determined by dynamic calibration.
• The main objective of all calibration activities is to ensure that the measuring
instrument will function to realize its accuracy objectives.
• General calibration requirements of the measuring systems are as follows:
• (a) accepting calibration of the new system,
• (b) ensuring traceability of standards for the unit of measurement under
consideration, and
• (c) carrying out calibration of measurement periodically, depending on the
usage or when it is used after storage.
• Calibration is achieved by comparing the measuring instrument with the
following: (a) a primary standard, (b) a known source of input, and (c) a
secondary standard that possesses a higher accuracy than the instrument to be
calibrated.
• During calibration, the dimensions and tolerances of the gauge or accuracy of
the measuring instrument is checked by comparing it with a standard
instrument or gauge of known accuracy.
• If deviations are detected, suitable adjustments are made in the instrument to
ensure an acceptable level of accuracy. The limiting factor of the calibration
process is repeatability, because it is the only characteristic error that cannot be
calibrated out of the measuring system and hence the overall measurement
accuracy is curtailed
Terminologies in Metrology

• Sensitivity : It is the quotient of the increase in the observed


variable and the corresponding increase in the measured quantity.

• Resolution: Resolution is the smallest change in a physical


property that an instrument can sense.
• Range: It is the difference between the lower and higher values that the instrument is able to measure, i.e., the minimum and maximum values of the quantity the instrument can measure.
• Drift: Drift can be defined as the variation caused in the output
of an instrument, which is not caused by any change in the input.
• Zero Stability: It is defined as the ability of an instrument to return to the zero reading after the input signal or measurand returns to zero and other variations or disturbances are removed.
• Threshold: If the input to the instrument is gradually increased from zero, a minimum value of the input is required before the instrument registers any detectable output; this minimum input value is called the threshold.
Hysteresis In Measurement Systems

• Slack motion in bearings and


gears, storage of strain energy
in the system, bearing
friction, residual charge in
electrical components, etc.,
are some of the reasons for
the occurrence of hysteresis.
• Hysteresis is the difference
between the indications of the
measuring instrument when
value of the quantity is
measured in both ascending
and descending orders.
Linearity of Measurement System

A measuring instrument/system is said to be linear if it responds uniformly to incremental changes, that is, the output changes in direct proportion to the measured property (input) over a specified range.

Linearity: Linearity is defined as the


maximum deviation of the output of the
measuring system from a specified straight
line applied to a plot of data points on a
curve of measured (output) values versus
the measurand (input) values.
Linearity of Measurement System
Systematic or Controllable Errors
• A systematic error is a type of error that deviates by a fixed amount from the
true value of measurement.
• These types of errors are controllable in both their magnitude and their
direction, and can be assessed and minimized if efforts are made to analyse
them.
• In order to assess them, it is important to know all the sources of such errors,
and if their algebraic sum is significant with respect to the manufacturing
tolerance, necessary allowance should be provided to the measured size of the
workpiece.
• Examples of such errors include measurement of length using a metre scale,
measurement of current with inaccurately calibrated ammeters, etc. When the
systematic errors obtained are minimum, the measurement is said to be
extremely accurate.
• It is difficult to identify systematic errors, and statistical analysis cannot be
performed. In addition, systematic errors cannot be eliminated by taking a
large number of readings and then averaging them out. These errors are
reproducible inaccuracies that are consistently in the same direction.
Minimization of systematic errors increases the accuracy of measurement.
• Calibration Errors A small amount of variation from the nominal value will be
present in the actual length standards, as in slip gauges and engraved scales.
Inertia of the instrument and its hysteresis effects do not allow the instrument
to translate with true fidelity.
• Hysteresis is defined as the difference between the indications of the
measuring instrument when the value of the quantity is measured in both the
ascending and descending orders. These variations have positive significance
for higher-order accuracy achievement.
• Calibration curves are used to minimize such variations. Inadequate
amplification of the instrument also affects the accuracy.
• Ambient Conditions It is essential to maintain the ambient conditions at
internationally accepted values of standard temperature (20 ºC) and pressure
(760mmHg) conditions.
• A small difference of 10mmHg can cause errors in the measured size of the
component. The most significant ambient condition affecting the accuracy of
measurement is temperature. An increase in temperature of 1ºC results in an
increase in the length of C25 steel by 0.3µm, and this is substantial when
precision measurement is required.
• In order to obtain error-free results, a correction factor for temperature has to
be provided. Therefore, in case of measurements using strain gauges,
temperature compensation is provided to obtain accurate results. Relative
humidity, thermal gradients, vibrations, and CO2 content of the air affect the
refractive index of the atmosphere. Thermal expansion occurs due to heat
radiation from different sources such as lights, sunlight, and body temperature
of operators
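A small sketch of such a temperature correction, assuming a typical handbook expansion coefficient for carbon steel (the coefficient and readings are illustrative assumptions, not values from the slides):

```python
# Hedged sketch: referring a length measured away from the standard 20 degC back to 20 degC.
# The expansion coefficient below is a typical handbook value for carbon steel, assumed here.
ALPHA_STEEL = 11.5e-6      # per degC, assumed coefficient of linear expansion

def length_at_20C(measured_mm: float, temp_C: float, alpha: float = ALPHA_STEEL) -> float:
    """Correct a length measured at temp_C to the standard reference temperature of 20 degC."""
    return measured_mm / (1 + alpha * (temp_C - 20.0))

# Example: a nominally 25 mm part measured in a room at 21 degC.
measured = 25.0003                      # mm, illustrative reading
print(length_at_20C(measured, 21.0))    # correction of roughly 0.3 um for a 25 mm length, per 1 degC
```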
Deformation of Workpiece
• Any elastic body, when subjected to a load, undergoes elastic deformation.
The stylus pressure applied during measurement affects the accuracy of
measurement. Due to a definite stylus pressure, elastic deformation of the
workpiece and deflection of the workpiece shape may occur, as shown in
figure .
• The magnitude of deformation depends on the applied load, area of contact,
and mechanical properties of the material of the given workpiece. Therefore,
during comparative measurement, one has to ensure that the applied measuring
loads are same.
• Datum errors Datum error is the difference between the true value of the
quantity being measured and the indicated value, with due regard to the sign of
each. When the instrument is used under specified conditions and a physical
quantity is presented to it for the purpose of verifying the setting, the
indication error is referred to as the datum error.
• Reading errors These errors occur due to the mistakes committed by the
observer while noting down the values of the quantity being measured. Digital
readout devices, which are increasingly being used for display purposes,
eliminate or minimize most of the reading errors usually made by the observer.
• Errors due to parallax effect Parallax errors occur when the sight is not
perpendicular to the instrument scale or the observer reads the instrument from
an angle. Instruments having a scale and a pointer are normally associated with
this type of error. The presence of a mirror behind the pointer or indicator
virtually eliminates the occurrence of this type of error
• Effect of misalignment These occur due to the inherent inaccuracies present
in the measuring instruments. These errors may also be due to improper use,
handling, or selection of the instrument. Wear on the micrometer anvils or
anvil faces not being perpendicular to the axis results in misalignment, leading
to inaccurate measurements. If the alignment is not proper, sometimes sine and
cosine errors also contribute to the inaccuracies of the measurement.
• Zero errors When no measurement is being carried out, the reading on the scale of the instrument should be zero. A zero error is said to exist when the initial value of a physical quantity indicated by the measuring instrument is a non-zero value when it should actually have been zero.
Random Errors:
Random errors provide a measure of random deviations when measurements of a physical
quantity are carried out repeatedly. When a series of repeated measurements are made on a
component under similar conditions, the values or results of measurements vary. Specific
causes for these variations cannot be determined, since these variations are unpredictable
and uncontrollable by the experimenter and are random in nature. They are of variable
magnitude and may be either positive or negative.
These errors are caused by small variations in the positions of the setting standard and the workpiece, slight displacements of lever joints in measuring instruments, transient fluctuations in friction, and operator errors in reading the scale.
Characteristics of random errors:
1. Random errors follow the normal distribution curve, so the probability of occurrence is equal for positive and negative errors.
2. The arithmetic mean of the random errors in a given series of measurements approaches zero as the number of measurements increases.
3. The standard deviation is used to measure the measuring error.
4. The maximum error is taken as three times the standard deviation.
Random errors can be minimized by calculating the average of a large number of observations.
Since precision is closely associated with the repeatability of the measuring
process, a precise instrument will have very few random errors and better
repeatability.
Hence, random errors limit the precision of the instrument.
The following are the likely sources of random errors:
1.Presence of transient fluctuations in friction in the measuring instrument
2. Play in the linkages of the measuring instruments
3. Error in operator’s judgement in reading the fractional part of engraved scale
divisions
4. Operator’s inability to note the readings because of fluctuations during
measurement
5. Positional errors associated with the measured object and standard, arising due
to small variations in setting.
Random Errors
For n repeated readings x1, x2, …, xn:
Mean: x̄ = (x1 + x2 + … + xn)/n
Standard deviation: σ = √[ Σ(xi − x̄)² / n ]
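A short illustrative sketch of these statistics for a set of repeated readings (the readings are made up; the 3σ value corresponds to the maximum random error mentioned above):

```python
# Hedged sketch: mean, standard deviation, and the 3-sigma "maximum error" for repeated readings.
import statistics

readings_mm = [24.998, 25.002, 25.001, 24.999, 25.000, 25.003, 24.997, 25.000]  # illustrative data

mean = statistics.mean(readings_mm)       # arithmetic mean of the readings
sigma = statistics.pstdev(readings_mm)    # standard deviation of the set
max_error = 3 * sigma                     # maximum random error taken as 3 * sigma

print(f"Mean           : {mean:.4f} mm")
print(f"Std. deviation : {sigma:.4f} mm")
print(f"Max error (3s) : {max_error:.4f} mm")
```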
Relationship between systematic and random
errors with measured value
SYSTEMATIC ERROR | RANDOM ERROR
Repetitive in nature | Random in nature
These errors result from improper conditions and procedures | These errors are inherent in the measuring system
Controlled in magnitude and sense | Accidental in nature and difficult to control
After proper analysis, these errors can be reduced or eliminated | Cannot be eliminated
Statistical methods do not apply to these errors | Statistical methods apply to these errors
Examples: parallax error, calibration error, etc. | Slight displacement of measuring joints, friction of mating parts, their combined effect, etc.
Errors
Alignment Errors/Cosine errors
Length to be measured = L
Actual measurement = M
Error = M – L = M – M cos θ = M(1 – cos θ)
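A small sketch of this cosine-error relation (the reading and misalignment angle are illustrative values):

```python
# Hedged sketch of the cosine (alignment) error relation on this slide:
# error = M - L = M * (1 - cos theta), where M is the reading taken along an axis
# misaligned by theta from the true line of measurement.
import math

def cosine_error(measured_mm: float, misalignment_deg: float) -> float:
    """Error introduced when the measurement axis is tilted by misalignment_deg."""
    theta = math.radians(misalignment_deg)
    return measured_mm * (1 - math.cos(theta))

print(cosine_error(100.0, 1.0))   # about 0.015 mm error for a 1 degree misalignment over 100 mm
```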
Elastic deformation due to stylus pressure
Parallax Error

[Figure: pointer-and-scale arrangement showing parallax error — readings taken from wrong observer positions differ from the reading taken from the correct position, perpendicular to the scale.]

• Occurs when the line of sight is not exactly perpendicular to the scale
• Associated with pointer-on-scale arrangements
Contact Errors

Airy Points:
Airy points can be defined as the points at which a horizontal rod is optimally supported to prevent it from bending. These points are used to support a length standard in such a way as to minimize the error due to bending.
Airy Points
• Sir George Biddell Airy (1801–92) showed that the distance d between the supports can be determined by the formula d = L/√(n² − 1), where n is the number of supports and L is the length of the bar. When the bar is supported at two points, n = 2. Substituting this value in the formula, the distance between the supports is obtained as 0.577L.
• This means that the supports should be at an equal distance from each end, positioned 0.577L apart. Normally, Airy points are marked on length bars longer than 150 mm.
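A brief sketch of this relation (the bar length below is an illustrative value):

```python
# Hedged sketch of the Airy-point relation quoted above: d = L / sqrt(n**2 - 1).
# For a bar on two supports (n = 2), d = 0.577 L, with the supports symmetric about the centre.
import math

def airy_support_spacing(L_mm: float, n: int = 2) -> float:
    """Distance between supports for minimum bending of a horizontal bar."""
    return L_mm / math.sqrt(n**2 - 1)

L = 1000.0                       # mm, illustrative length of the bar
d = airy_support_spacing(L)      # about 577 mm between the two supports
print(d, (L - d) / 2)            # spacing, and distance of each support from the ends
```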
Accuracy Vs Cost

Accuracy depends on
1. Calibration Standards
2. Work piece being measured
3. Measuring Instruments
4. Person carrying out the
measurement
5. Environmental influences
Measurement Standards
▪The accurate measurement must be made by
comparison with a standard of known dimension and
such a standard is called “Primary Standard”

•The first accurate standard was made in England and


was known as “Imperial Standard yard” which was
followed by “International Prototype meter” made in
France.

•Since these two standards of length were made of


metal alloys they are called ‘material length standards’.
International Prototype METER
It is defined as the straight line distance, at 0°C, between the
engraved lines of pure platinum-iridium alloy (90% platinum
& 10% iridium) of 1020 mm total length and having a
‘TRESCA’ cross section as shown in fig. The graduations are
on the upper surface of the web which coincides with the
neutral axis of the section.
Imperial Standard YARD
An imperial standard yard, shown in fig, is a bronze (82%
Cu, 13% tin, 5% Zinc) bar of 1 inch square section and 38
inches long. A round recess, 1 inch away from the two ends is
cut at both ends up to the central or ‘neutral plane’ of the
bar.
Further, a small round recess of (1/10) inch in diameter is
made below the center. Two gold plugs of (1/10) inch
diameter having engravings are inserted into these holes so
that the lines (engravings) are in neutral plane.
Yard is defined as the distance between the two central
transverse lines of the gold plug at 62°F.
Disadvantages of Material
Standards
The following disadvantages are associated with material standards:
1. Material standards are affected by changes in environmental conditions
such as temperature, pressure, humidity, and ageing, resulting in
variations in length.
2. Preservation of these standards is difficult because they must have
appropriate security to prevent their damage or destruction.
3. Replicas of material standards are not available for use at other places.
4. They cannot be easily reproduced.
5. Comparison and verification of the sizes of gauges pose considerable difficulty.
6. While changing to the metric system, a conversion factor is necessary.
Light (optical) Wave Length Standard
• Line standards and End standards are material (physical)
standards, which are subjected to destruction and their
dimensions vary with time.
• Also, a considerable difficulty is observed, while comparing
and verifying the size of gauges by these standards.
• To overcome the above difficulties, wavelength of light is
introduced as standard of measurement. It is highly
accurate and does not depend on any physical standard.
With monochromatic light, we have the advantage of
constant wavelength.
• The wavelength standard works on the principle of the phenomenon of 'interference of light waves'. This interferometric technique enables the size of slip gauges and end bars to be determined directly in terms of the wavelength of light, instead of metres.
• Since a wavelength does not have a physical form, it need not be preserved like material standards. The wavelength standard is therefore called a reproducible standard of length.
• The error in reproducibility is very small (about 1 part in 100 million), so it can be neglected.
• Krypton–86 is the most suitable source of radiations of
constant wavelength. Due to its high degree of accuracy, it
is accepted as International wavelength standard.
• According to this standard, 1 metre is expressed as
1,650,763.73 times the wavelength of red-orange radiation
of Kr-86 in vacuum. Also, we know that, 1 yard is equal to
0.9144 metre, therefore wavelength standard can be easily
transferred to line standard and end standard.
Subdivision of Standards
The imperial standard yard and the international
prototype meter are master standards & cannot be
used for ordinary purposes.

Thus based upon the accuracy required, the


standards are subdivided into four grades namely;

1. Primary Standards
2. Secondary standards
3. Tertiary standards
4. Working standards
Primary Standards
• They are material standards preserved under
most careful conditions.
• These are not used directly for measurements
but are used once in 10 or 20 years for
calibrating secondary standards.
• Ex: International Prototype meter,
Imperial Standard yard.
Secondary Standards
• These are close copies of primary standards w.r.t design,
material & length.

• Any error existing in these standards is recorded by


comparison with primary standards after long intervals.

• They are kept at a number of places under great


supervision and serve as reference for tertiary standards.

• This also acts as safeguard against the loss or


destruction of primary standards.
Tertiary Standards
• Tertiary standards are the reference standards
employed by National Physical laboratory (N.P.L)
and are the first standards to be used for reference
in laboratories & workshops.
• They are made as close copies of secondary
standards & are kept as reference for comparison
with working standards.
Working Standards
• These standards are similar in design to primary,
secondary & tertiary standards.
• But being less in cost and are made of low grade
materials, they are used for general applications in
metrology laboratories.
Sometimes, standards are also classified as;

✓Reference standards (used as reference purposes)


✓Calibration standards (used for calibration of inspection
& working standards)
✓Inspection standards (used by inspectors)
✓Working standards (used by operators)
• Accuracy is one of the most important factors to be
maintained and should always be traceable to a single
source, usually the national standards of the country.
• National laboratories of most of the developed countries
are in close contact with the BIPM. This is essential
because ultimately all these measurements are compared
with the standards developed and maintained by the
bureaus of standards throughout the world.
• Hence, there is an assurance that items manufactured to
identical dimensions in different countries will be
compatible with each other, which helps in maintaining a
healthy trade.
Measurement Standards can also be classified as
1] Line Standard –
It is distance between two lines eg - meter, yard
2] End Standard-
It is distance between two plane surfaces -slip
gauges, micrometer anvils
LINE STANDARDS | END STANDARDS
When the length being measured is expressed as the distance between two lines, it is called a 'Line Standard'. | When the length being measured is expressed as the distance between two parallel faces, it is called an 'End Standard'.
Examples: measuring scales, Imperial Standard Yard, International Prototype Metre, etc. | Examples: slip gauges, gap gauges, ends of micrometer anvils, etc.
Limited accuracy (0.2 mm) | High accuracy (0.001 mm)
Measurement is quick and easy | Measurement is time consuming
Subject to parallax error | Subject to errors from change in temperature or improper wringing of slip gauges
Manufacturing is simple and cost is low | Manufacturing is complex and cost is high
CALIBRATION
➢ It is the procedure to establish relationship between the values
of the quantities indicated by measuring instrument and the
corresponding values realized by standards under specified
conditions.
➢ The four primary reasons for calibrations are to provide
traceability, to ensure that the instrument (or standard) is
consistent with other measurements, to determine accuracy,
and to establish reliability
➢ Calibration is achieved by comparing the measurement
instrument with primary standard, known source of input,
secondary standard that possess higher accuracy than the
instrument to be calibrated
Traceability.
➢ It is property of a measurement result whereby
the result can be related to a reference through
a documented unbroken chain of calibrations,
each contributing to the measurement
uncertainty
➢ It permits the comparison of measurements,
whether the result is compared to the previous
result in the same laboratory, a measurement
result a year ago, or to the result of a
measurement performed anywhere else in the
world.
➢ The chain of traceability allows any
measurement to be referenced to higher levels
of measurements back to the original definition
of the unit
SLIP GAUGES OR GAUGE BLOCKS

Slip gauges are rectangular blocks of steel having


cross section of 30 mm face length & 10 mm face
width as shown in fig.
➢ Slip gauges are blocks of steel that have been hardened and
stabilized by heat treatment.
➢ They are ground and lapped to size to very high standards of
accuracy and surface finish. They are end Standards
➢ A gauge block (also known as a Johansson gauge, slip gauge, or Jo block) is a precision length-measuring standard consisting of a ground and lapped metal or ceramic block.
➢ Slip gauges were invented in 1896 by the Swedish machinist Carl Edward Johansson.
➢ The two measuring faces are highly finished (ground and lapped).
➢ They come in standard series so that any dimension can be built up, in steps as fine as 0.001 mm / 0.0005 mm.
Principle of Wringing: “When two clean and very accurately flat surfaces are slid together under pressure, they adhere firmly.”
• Wringing is nothing but, adhering two flat and parallel surfaces by
sliding or pressing the measuring faces of a slip gauge against
measuring faces of other slip gauges.
• Slip gauges are also called as gauge blocks, whereas, measuring
faces are also known as reference surfaces or datum surfaces or
contact surfaces.
• Wringing effect is possible,
(i) Partly due to molecular attraction, between the contact surfaces,
and
(ii) Partly due to atmospheric pressure.
Wringing of Slip Gauges
• It is the joining
process of 2 slip
gauges.
• By this method you
get any dimension to
be measured/built-
up.
• Method of wringing
shown in Fig
• Slip gauges are classified into grades depending on their guaranteed
accuracy. The grade defines the type of application for which a slip gauge
is suited, such as inspection, reference, or calibration.
• Accordingly, slip gauges are designated into five grades, namely grade 2,
grade 1, grade 0, grade 00, and calibration grade.
• Grade 2 This is the workshop-grade slip gauge. Typical uses include
setting up machine tools, milling cutters, etc., on the shop floor.
• Grade 1 This grade is used for tool room applications for setting up sine
bars, dial indicators, calibration of vernier, micrometer instruments, and
so on.
• Grade 0 This is an inspection-grade slip gauge. Limited people will have
access to this slip gauge and extreme care is taken to guard it against
rough usage.
• Grade 00 This set is kept in the standards room and is used for
inspection/calibration of high precision only. It is also used to check the
accuracy of the workshop and grade 1 slip gauges .
• Calibration grade
• This is a special grade, with the actual sizes of slip gauges stated on a special
chart supplied with the set of slip gauges. This chart gives the exact dimension
of the slip gauge, unlike the previous grades, which are presumed to have been
manufactured to a set tolerance.
• They are the best-grade slip gauges because even though slip gauges are
manufactured using precision manufacturing methods, it is difficult to achieve
100% dimensional accuracy.
• Calibration-grade slip gauges are not necessarily available in a set of preferred
sizes, but their sizes are explicitly specified up to the third or fourth decimal
place of a millimetre.
INDIAN STANDARD ON SLIP GAUGES (IS 2984-1966)

Slip gauges are graded according to their accuracy as Grade 0, Grade


I & Grade II.

Grade II is intended for use in workshops during actual production of


components, tools & gauges.

Grade I is of higher accuracy for use in inspection departments.

Grade 0 is used in laboratories and standard rooms for periodic


calibration of Grade I & Grade II gauges.
Set of 87 Slip Gauges

Range (mm) | Step (mm) | Pieces
1.001 to 1.009 | 0.001 | 9
1.01 to 1.49 | 0.01 | 49
0.5 to 9.5 | 0.5 | 19
10 to 90 | 10 | 9
1.0005 | – | 1
Total | | 87
Normal set of M112 Gauges
Range (mm) | Step (mm) | Pieces
1.001 to 1.009 | 0.001 | 9
1.01 to 1.49 | 0.01 | 49
0.5 to 24.5 | 0.5 | 49
25, 50, 75, 100 | 25 | 4
1.0005 | – | 1
Total | | 112
List the minimum number of slip gauges to be brought
together to produce an overall dimension of 73.975 mm,
using a set of 87 pieces.
For instance, the set of 103 pieces consists of the following:
1. One piece of 1.005mm
2. 49 pieces ranging from 1.01 to 1.49mm in steps of 0.01mm
3. 49 pieces ranging from 0.5 to 24.5mm in steps of 0.5mm
4. Four pieces ranging from 25 to 100mm in steps of 25mm
A set of 56 slip gauges consists of the following:
1. One piece of 1.0005mm
2. Nine pieces ranging from 1.001 to 1.009mm in steps of
0.001mm
3. Nine pieces ranging from 1.01 to 1.09mm in steps of
0.01mm
4. Nine pieces ranging from 1.0 to 1.9mm in steps of 0.1mm
5. 25 pieces ranging from 1 to 25mm in steps of 1.0mm
6. Three pieces ranging from 25 to 75mm in steps of 25mm
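As a rough illustration of building a dimension from the 87-piece set, the sketch below brute-forces the smallest combination of gauges that sums to 73.975 mm; the set contents follow the table above, and the function name and brute-force approach are my own, not from the slides:

```python
# Hedged sketch: search for the smallest combination of slip gauges from the 87-piece set
# that builds a target dimension (here 73.975 mm). The manual procedure of clearing the
# finest decimal place first gives 1.005 + 1.47 + 1.5 + 70 = 73.975 mm; the search below
# confirms that four gauges suffice.
from itertools import combinations

set_87 = (
    [round(1 + 0.001 * i, 4) for i in range(1, 10)]   # 1.001 to 1.009 mm, step 0.001 (9 pcs)
    + [round(1 + 0.01 * i, 2) for i in range(1, 50)]  # 1.01 to 1.49 mm, step 0.01 (49 pcs)
    + [round(0.5 * i, 1) for i in range(1, 20)]       # 0.5 to 9.5 mm, step 0.5 (19 pcs)
    + [10.0 * i for i in range(1, 10)]                # 10 to 90 mm, step 10 (9 pcs)
    + [1.0005]                                        # one 1.0005 mm gauge
)

def min_combination(target_mm, gauges=set_87, max_pieces=5):
    """Return the fewest distinct gauges whose lengths sum to target_mm, or None."""
    for k in range(1, max_pieces + 1):
        for combo in combinations(gauges, k):
            if abs(sum(combo) - target_mm) < 1e-6:
                return sorted(combo)
    return None

print(min_combination(73.975))   # e.g. [1.005, 1.47, 1.5, 70.0]
```

The brute force is only for illustration; in practice one clears the third decimal place first (1.005), then the second (1.47), then the halves (1.5), and finally the tens (70).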
Linear Measurement
Cast Iron/Granite/Glass/Datum Surface Plate
Every linear measurement starts at a
reference point and ends at a measured
point. This is true when our basic interest is
in measuring a single dimension, length in
this case. However, the foundation for all
dimensional measurements is the ‘datum
plane’, the most important one being the
surface plate.
Since a surface plate is used as the datum
for all measurements on a job, it should be
finished to a high degree of accuracy. It
should also be robust to withstand repeated
The surface plates are made either from cast contacts with metallic workpieces and not
iron or from granite. Even though granite be vulnerable to wear and tear.
surface plates are perceived to be superior,
cast iron surface plates are still in wide use.
Cast Iron Surface Plates
• They are made of either plain or alloyed close-grained cast iron, reinforced
with ribs to provide strength against bending or buckling.
• The plates are manufactured in three grades, namely grade 0, grade I, and
grade II. While grade 0 and grade I plates are hand scraped to achieve the
required degree of flatness, grade II plates are precision machined to the
required degree of accuracy.
• Cast iron is dimensionally more stable over time compared to granite plates.
Unlike granite, it also has uniform optical properties with very small light
penetration depth, which makes it a favourable material for certain optical
applications.
• One significant drawback of cast iron is its higher coefficient of thermal
expansion, which makes it unsuitable for applications involving large
variations in temperature
Granite Surface Plates
• In recent times, granite has replaced cast iron as the preferred
material for surface plates.
• Most surface plates are made of black granite, while pink granite is
the next preferred choice.
• Granite has many advantages over cast iron. Natural granite that has seasoned in the open for thousands of years is free from warpage or deterioration.
• It is twice as hard as cast iron and not affected by temperature
changes. It is not vulnerable to rusting and is non-magnetic
• The plates are manufactured in three grades, namely grade 00,
grade 0, and grade 1. Flatness for grade 00 surface plates range
from 0.002 to 0.007mm, whereas that for grade 0 plates varies
from 0.003 to 0.014mm. Grade 1 plate is the coarsest, with flatness
ranging from 0.005 to 0.027mm.
Linear Measurement
Calipers-Transfer Instruments
Consider a job with a circular cross section. An attempt to take the measurement using a steel rule alone will lead to error, since the steel rule cannot be positioned diametrically across the job with the required degree of accuracy. One option is to use a combination set. However, calipers are the original transfer instruments used to transfer such measurements onto a rule.
Calliper being used to transfer a dimension from a job to a rule
Calipers-Transfer Instruments
Vernier Instruments
• Vernier instruments based on the vernier scale principle can measure
up to a much finer degree of accuracy. In other words, they can
amplify finer variations in dimensions and can be branded as
‘precision’ instruments.
• a vernier scale provides a least count of up to 0.01mm or less, which
remarkably improves the measurement accuracy of an instrument. It
has become quite common in the modern industry to specify
dimensional accuracy up to 1 µm or less.
• A vernier scale comprises two scales: the main scale and the vernier
scale.
Vernier Instruments
Least count = 1 MSD/N = 1 mm/10 = 0.1 mm
Therefore, total reading = 1 + (4 × 0.1) = 1.4 mm
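A small sketch of this vernier-reading rule (the values follow the example above):

```python
# Hedged sketch of the vernier reading rule: total = main scale + coinciding division * least count.
def vernier_reading(main_scale_mm: float, msd_mm: float, n_divisions: int,
                    coinciding_division: int) -> float:
    """Total reading = main scale reading + coinciding vernier division * least count."""
    least_count = msd_mm / n_divisions
    return main_scale_mm + coinciding_division * least_count

# Example from the slide: least count = 1 mm / 10 = 0.1 mm, reading = 1 + 4 * 0.1 = 1.4 mm
print(vernier_reading(main_scale_mm=1.0, msd_mm=1.0, n_divisions=10, coinciding_division=4))
```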
Vernier Caliper

• The L-shaped main frame also serves as the fixed jaw at its end. The
movable jaw, which also has a vernier scale plate, can slide over the
entire length of the main scale, which is engraved on the main frame or
the beam.
• A clamping screw enables clamping of the movable jaw in a particular
position after the jaws have been set accurately over the job being
measured.
• In order to capture a dimension, the operator has to open out the two jaws,
hold the instrument over the job, and slide the movable jaw inwards, until the
two jaws are in firm contact with the job.
• A fine adjustment screw enables the operator to accurately enclose the portion
of the job where measurement is required by applying optimum clamping
pressure.
• The recommended measuring ranges for vernier calipers are 0–125, 0–200, 0–
250, 0–300, 0–500, 0–750, 0–1000, 750–1500, and 750– 2000mm
Dial Caliper

• In a dial calliper, the reading can be directly taken from a


dial gauge that is attached to the caliper. The dial gauge
has its own least count, which is clearly indicated on the
face of the dial. By multiplying the value of the reading
indicated by the least count, one can calculate the
measured value easily. A small but precise pair of rack and
pinion drives a pointer on a circular scale.
• This facilitates direct reading without the need to read a
vernier scale. Typically, the pointer undergoes one
complete rotation per centimetre or per millimetre of linear
measurement.
• This measurement should be added to the main scale
reading to get the actual reading. A dial calliper also
eliminates parallax error, which is associated with a
conventional vernier calliper.
Electronic Digital Caliper

An electronic digital calliper is a battery-operated instrument that


displays the reading on a liquid crystal display (LCD) screen. The
digital display eliminates the need for calculations and provides an
easier way of taking readings.
In order to initialize the instrument, the external jaws are brought
together until they touch and the ‘zero button’ is pressed to set the
reading to zero. The digital calliper can now be used to measure a
linear dimension. Some digital callipers can be switched between
centimetres or millimetres, and inches.
• Additional features of an electronic digital calliper are its electronic calculator functions and its capability to be interfaced with a computer. It can be set to either the metric or the British system of units.
• The ‘floating zero’ option allows any place within
the scale range to be set to zero. The digital
display will then exhibit either plus or minus
deviations of the jaw from a reference value. This
enables the instrument to be also used as a limit
gauge.
Vernier Depth Gauge
The lower surface of the base has
to butt firmly against the upper
surface of the hole or recess whose
depth is to be measured.
The vernier scale is stationary and
screwed onto the slide, whereas the
main scale can slide up and down.
The nut on the slide has to be
loosened to move the main scale.
The main scale is lowered into the
hole or recess, which is being
measured.
Vernier Height Gauge
The graduated scale or bar is held in a
vertical position by a finely ground and
lapped base.
A precision ground surface plate is
mandatory while using a height gauge. The
feature of the job to be measured is held
between the base and the measuring jaw.
The measuring jaw is mounted on a slider
that moves up and down, but can be held in
place by tightening of a nut.
A fine adjustment clamp is provided to
ensure very fine movement of the slide in
order to make a delicate contact with the job.
Unlike in depth gauge, the main scale in a
height gauge is stationary while the slider
moves up and down
• Long gauge stacks are not nearly as accurate as
shorter gauge blocks.
• Temperature variation becomes more critical.
• A difference in deformation occurs at the point of
roller contact to the support surface and to the
gauge blocks
• The size of gauges, instruments or parts that a sine
bar can inspect is limited, since it is not designed
to support large or heavy objects.
Micrometer Instruments
• The term 'micrometer' has two meanings. The first is as a unit of measure, being one thousandth of a millimetre.
• The second meaning is a hand-held measuring instrument using a screw-based mechanism.
• A micrometer can provide a better least count and accuracy than a vernier calliper. The better accuracy results from the fact that the line of measurement is in line with the axis of the instrument, unlike in the vernier calliper, which does not conform to this condition.
Abbe’s Law
It states that ‘maximum accuracy may be obtained only when
the standard is in line with the axis of the part being
measured’.

Conformity to Abbe’s law (a) Micrometer (b) Vernier calliper


Outside Micrometer

It consists of a C-shaped frame with a stationary anvil and a


movable spindle. The spindle movement is controlled by a
precision ground screw. The spindle moves as it is rotated in a
stationary spindle nut. A graduated scale is engraved on the
stationary sleeve and the rotating thimble. The zeroth mark on the
thimble will coincide with the zeroth division on the sleeve when
the anvil and spindle faces are brought together.
• Micrometers with metric scales are prevalent in India. The
graduations on the sleeve are in millimetres and can be
referred to as the main scale.
• If the smallest division on this scale reads 0.5mm, each
revolution of the thimble advances the spindle face by
0.5mm. The thimble, in turn, will have a number of
divisions.
• Suppose the number of divisions on the thimble is 50, then
the least count of the micrometer is 0.5/50, that is, 0.01mm
• In this example, the main scale reading is 8.5mm, which
is the division immediately preceding the position of the
thimble on the main scale. As already pointed out, let us
assume the least count of the instrument to be 0.01mm.
The 22nd division on the thimble is coinciding with the
reference line of the main scale. Therefore, the reading is
as follows: 8.5 + 22 (0.01)mm = 8.72mm
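A small sketch of this micrometer-reading rule, using the 0.5 mm pitch and 50 thimble divisions assumed above:

```python
# Hedged sketch of the micrometer reading rule: least count = pitch / thimble divisions.
def micrometer_reading(main_scale_mm: float, thimble_division: int,
                       pitch_mm: float = 0.5, thimble_divisions: int = 50) -> float:
    """Total reading = main scale reading + thimble division * least count."""
    least_count = pitch_mm / thimble_divisions    # 0.5 / 50 = 0.01 mm
    return main_scale_mm + thimble_division * least_count

print(micrometer_reading(8.5, 22))   # 8.5 + 22 * 0.01 = 8.72 mm
```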
Disc Micrometer
• Disk micrometer It is used for
measuring the distance between
two features with curvature. A
tooth span micrometer is one
such device that is used for
measuring the span between the
two teeth of a gear. Although it
provides a convenient means
for linear measurement, it is
prone to error in measurement
when the curvature of the
feature does not closely match
the curvature of the disk
DISC /Screw/Dial/Blade Type
Screw thread micrometer It
measures pitch diameters directly.
The anvil has an internal ‘vee’,
which fits over the thread.
Since the anvil is free to rotate, it
can accommodate any rake range of
thread.
However, interchangeable anvils
need to be used to cover a wide
range of thread pitches.
The spindle has a conical shape and
is ground to a precise dimension
Dial micrometer The dial indicator
fixed to the frame indicates the
linear displacement of a movable
anvil with a high degree of
precision.
It is especially useful as a
comparator for GO/NO-GO
judgement in mass production. The
dial micrometer normally has an
accuracy of 1 µm and repeatability
of 0.5 µm. Instruments are
available up to 50mm measuring
distance, with a maximum
measuring force of 10 N. The dial
tip is provided with a carbide face
for a longer life
DIGITAL Micrometer
Inside Micrometer Caliper
Depth Micrometer
• Determine the set of slip gauges required for 58.975 mm, using the M112 set.
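One possible build-up from the M112 set, obtained by clearing the finest decimal place first (this combination is a suggested solution, not stated on the slide): 58.975 = 1.005 + 1.47 + 6.5 + 50 mm, i.e., four slip gauges.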
Change in the sine value with change
in the angle
REFERENCES
▪ A Textbook of Engineering Metrology and Measurements by N. V. Raghavendra and L. Krishnamurthy, Oxford University Press.
▪ Engineering Metrology by R. K. Jain, Khanna Publishers.
ERROR
• Error is responsible for the difference between a
measured value and the “true” value
Types of Errors
➢ ALIGNMENT ERROR (COSINE ERROR)
➢ ENVIRONMENTAL ERROR (TEMPERATURE)
➢ ELASTIC DEFORMATION OR SUPPORT ERRORS
➢ SYSTEMATIC ERRORS (CONTROLLABLE ERROR)
➢ RANDOM ERROR
➢ DIRT ERROR
➢ CONTACT ERROR
➢ PARALLAX ERROR
Errors
• Error is the difference between the actual value & the measured
(indicated) value
• Classification
I] Static Errors
a) Reading errors (say parallax, misreading an instrument,
Alignment errors)
b) Characteristic errors (say due to calibration-inertia effect)
c) Environmental errors (say temperature)
II] Loading errors (errors due to elastic deformation-due to supports or
constant contact pressure eg: dial gauge in contact with job)
III] Dynamic errors
Caused by time variations in the measurand and the inability of the measurement system to respond to the time-varying measurand.
a) Controllable errors (mainly static errors)
b) Random errors (occur randomly)
Linear Measurement

V Blocks
USE OF SINE BAR
SINE BAR
Sine bar is a standard
instrument for accurate
measurement of angles .
Limitations of Sine Bars
The sine bar inherently becomes increasingly impractical and inaccurate as the angle exceeds 45° because of the following reasons:

• The sine bar is physically clumsy to hold in position.

• The body of the sine bar obstructs the gauge block stack,
even if relieved.

• Slight errors of the sine bar cause large angular errors.
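A short sketch of the sine-bar relation sin θ = h/L and of how the angle error grows with θ (the 200 mm roller centre distance and the stack-height error are assumed values, not from the slides):

```python
# Hedged sketch: the sine-bar relation sin(theta) = h / L, where h is the slip-gauge stack
# height and L the centre distance of the rollers (200 mm assumed here). It also shows why
# errors grow past 45 degrees: the same small height error dh produces a larger angle error,
# d(theta) ~ dh / (L * cos(theta)).
import math

L = 200.0     # mm, assumed roller centre distance of the sine bar
dh = 0.002    # mm, assumed error in the gauge-block stack height

for target_deg in (10, 30, 45, 60, 75):
    h = L * math.sin(math.radians(target_deg))                       # required gauge stack height
    dtheta_deg = math.degrees(dh / (L * math.cos(math.radians(target_deg))))
    print(f"{target_deg:2d} deg  stack = {h:7.3f} mm  angle error from +/-{dh} mm ~ +/-{dtheta_deg:.5f} deg")
```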
