
MECHANICAL MEASUREMENTS AND METROLOGY LABORATORY MANUAL

April 10, 2007

Department of Mechanical Engineering

Basaveshwar Engineering College, Bagalkot

MECHANICAL MEASUREMENTS AND METROLOGY LABORATORY (Common to ME/IP/AU/IM/MA)
Sub Code: IPL 37B / IPL 47B    IA Marks: 25
Hrs/Week: 03                   Exam Hours: 03
Total Hrs: 42                  Exam Marks: 50

Part A: MECHANICAL MEASUREMENTS
1. Calibration of pressure gauge
2. Calibration of thermocouple
3. Calibration of LVDT
4. Calibration of load cell
5. Determination of modulus of elasticity of a mild steel specimen using strain gauges

Part B: METROLOGY
6. Measurement using optical projector / toolmaker's microscope
7. Measurement of angle using sine center / sine bar / bevel protractor
8. Measurement of alignment using autocollimator / roller set
9. Measurement of cutting tool forces using a) lathe tool dynamometer, b) drill tool dynamometer
10. Measurement of screw thread parameters using the two-wire or three-wire method
11. Measurement of surface roughness using Talysurf / mechanical comparator
12. Measurement of gear tooth profile using gear tooth vernier / gear tooth micrometer
13. Calibration of a micrometer using slip gauges
14. Measurement using optical flats

Scheme of Examination:
ONE question from Mechanical Measurements (Part A): 20 Marks
ONE question from Metrology (Part B): 20 Marks
Viva Voce: 10 Marks

CONTENTS

Metrology

1. Measurement of effective diameter of a screw thread (3-wire method)
2. Measurement of gear tooth thickness
3. Measurement of taper angle (standard roller set method)
4. Measurement of taper angle (sine bar)
5. Calibration of micrometer

Instrumentation

1. Calibration of LVDT
2. Calibration of pressure gauge
3. Calibration of thermocouple
4. Calibration of torque sensor
5. Calibration of strain gauge

Calibration of micrometer screw

Calibration is the process of checking and ensuring the accuracy of a gage. For a micrometer, the process uses a gage block: a steel block cut to size to within a millionth of an inch. Gage blocks come in various sizes and are used to check the accuracy of measuring devices such as a micrometer. Note: each gage is labeled with its range on the frame (for example, 4"-5" or 1"-2"); see figure 2. The accuracy of any gage may be checked with the following calibration steps.

1. Use a gage block that falls within the limits of the gage's range (see fig. 2).
2. Snug the gage block between the spindle and anvil of the micrometer using appropriate feel (see fig. 3).
3. Use the ratchet stop, as explained in step 2 under zero checking, until you have a comfortable feel between the gage and the gage block.
4. Confirm calibration by checking that the display shows the dimension of the gage block.

5. You may want to confirm calibration by repeating these steps more than once. If the reading on the micrometer display shows the gage block's dimension, you may begin using the micrometer for measuring. The micrometer may also be checked for calibration using other gage blocks within its range; using blocks spread across the range tests the gage's accuracy from one end of the spindle travel to the other, and may uncover problems and explain why a gage is losing accuracy.

Errors in screw threads may be of three types, namely progressive, periodic and erratic. The method of manufacture of micrometer screws eliminates the last of these, but there may be a progressive error and also a periodic error in the readings, the latter usually caused by eccentricity of the thimble. The errors in an external micrometer are found by taking readings over slip gauges, the sizes of which must be chosen to disclose both types of error. A sufficient number of readings for the progressive error is obtained by using slips in steps of 2.5 mm, and for the periodic error by taking five readings during one revolution of the thimble. It is advisable to check the periodic error at two positions of the spindle, one near each end of its travel. Suitable slip gauges for testing a 0-25 mm micrometer are therefore:
For the progressive error: 2.5 to 25 mm in steps of 2.5 mm
For the periodic error: 2.1 to 2.5 mm in steps of 0.1 mm
The slips in the latter series are wrung on to the 20 mm slip to obtain readings for the periodic error near the fully open position of the micrometer. To avoid errors caused by expansion in handling, the micrometer should be clamped to a suitable stand and the slip gauges laid out in readiness some ten minutes before required, being held in tongs or with a piece of chamois leather during use. It is very important to apply the same pressure on each slip gauge.
To ensure this, the spindle must be rotated very slowly during the last part of a revolution until a ratchet slips by one click. Readings of the micrometer are taken, first with the measuring faces in contact and then over each slip gauge in turn, the results being recorded as plus or minus errors in units of 0.001 mm. From the readings, two graphs of errors should be drawn; one for the progressive error and the second, to a larger scale, for the periodic error. From these, the error in the micrometer at any nominal reading, and consequently the true size of the object measured, can be obtained.
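The reduction of the readings to error graphs can be sketched as below. The slip gauge sizes and micrometer readings here are purely illustrative, not measured data; errors are recorded as plus or minus units of 0.001 mm, as the text directs.

```python
# Hypothetical slip gauge sizes (mm) for the progressive-error check
# and the corresponding micrometer readings (illustrative values only).
nominal = [2.5, 5.0, 7.5, 10.0]
reading = [2.501, 5.001, 7.502, 10.003]

# Error at each step, expressed in units of 0.001 mm
errors_microns = [round((r - n) * 1000) for r, n in zip(reading, nominal)]
print(errors_microns)
```

These points, plotted against nominal size, give the progressive-error graph; the same reduction applied to the 2.1 to 2.5 mm series gives the periodic-error graph.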

3-wire method
Aim: To determine the effective diameter of a given thread by 3-wire method. Instruments: 1. Micrometer 2. Pitch gauge 3. Metric thread 4. Three wire set box Theory:

Effective diameter is the diameter of an imaginary cylinder, co-axial with the thread, on which the widths of metal and space are equal; in other words, the distance EF in the figure is equal to one half the pitch. With reference to the figure, the measurement over the wires is M = E + Q, where Q = W (1 + cosec θ) - (p/2) cot θ and θ is half the thread angle. These relations do not take the helix angle into account and are therefore only approximately correct; corrections for elastic deformation and obliquity are needed to obtain precise results. To minimize the effect of errors in the thread angle 2θ, it is preferable for the wires to make contact at the effective diameter. When the helix angle of the screw is neglected, this requires a fixed diameter of wire for a given combination of thread angle and pitch p. This wire diameter (also known as the best size) is W = (p/2) sec θ.

While the technique is easily illustrated, it is not so easily carried out in practice, and the difficulties increase with the tap diameter. The effective diameter can now be calculated from the following known elements:
1. The reading of the micrometer (the distance across the top of the wires, usually referred to as M). As stated earlier, M = E + Q, where E is the effective diameter and Q is a constant depending upon the wire diameter and the flank angle.
2. The wire diameter (using the mean value given by the manufacturer).
3. The thread angle.
Q = W (1 + cosec θ) - (p/2) cot θ
E = M - Q
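These relations can be checked numerically. The sketch below assumes an ISO metric thread form (60° included angle, so θ = 30°), a pitch of 2.5 mm, and a made-up micrometer reading M of 20.000 mm over best-size wires; none of these values come from the manual.

```python
import math

p = 2.5                        # pitch, mm (assumed)
theta = math.radians(30)       # half the thread angle (60 deg metric form)

# Best wire size: W = (p/2) * sec(theta)
W = (p / 2) / math.cos(theta)

# Q = W * (1 + cosec(theta)) - (p/2) * cot(theta)
Q = W * (1 + 1 / math.sin(theta)) - (p / 2) / math.tan(theta)

M = 20.000                     # hypothetical reading over the wires, mm
E = M - Q                      # effective diameter, E = M - Q
print(round(W, 4), round(Q, 4), round(E, 4))
```

Note that with best-size wires on a 60° thread, Q works out to 0.866 p.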

Calibration of LVDT
Aim: To calibrate the given linear variable differential transformer (LVDT). Apparatus: LVDT instrument set, instrument tutor, screwdriver, wires. A linear variable differential transformer (LVDT), illustrated in Figure F1, is a device which provides accurate position indication throughout the range of valve or mechanical travel. Unlike the potentiometer position indicator, it requires no physical connection to the extension.

Figure F1
The extension valve shaft, or control rod, is made of a metal suitable for acting as the movable core of a transformer. Moving the extension between the primary and secondary windings of the transformer causes the inductance between the two windings to vary, thereby varying the output voltage in proportion to the position of the valve or control rod extension. Figure F1 illustrates a valve whose position is indicated by an LVDT. If only an open/shut indication is desired, two small secondary coils could be placed at each end of the extension's travel.
LVDTs are extremely reliable. As a rule, failures are limited to rare electrical faults which cause erratic or erroneous indications. An open primary winding will cause the indication to fail to some predetermined value equal to zero differential voltage; this normally corresponds to mid-stroke of the valve. A failure of either secondary winding will cause the output to indicate either full open or full closed.
Procedure:
1. Connect the sensor set to the tutor.
2. Adjust the potentiometer to read zero.
3. Give a 1 mm displacement and adjust the potentiometer to read 1 mm.
4. Note down the potentiometer reading for each actual displacement.
5. Plot the graph of % Error vs. actual displacement.

Sl. No. | Actual displacement (mm) | Meter reading (mm) | Error | % Error
1       | 1                        |                    |       |
2       | 2                        |                    |       |
3       | 3                        |                    |       |

Error = Meter reading - Actual reading
% Error = (Error / Actual reading) x 100
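The error columns of the table follow directly from these two formulas; the meter readings below are made-up numbers for illustration only.

```python
# (actual displacement mm, meter reading mm) -- illustrative values
rows = [(1.0, 1.02), (2.0, 1.97), (3.0, 3.05)]

results = []
for actual, meter in rows:
    error = meter - actual              # Error = meter reading - actual reading
    pct = error / actual * 100.0        # % Error
    results.append((error, pct))
    print(f"{actual:4.1f} mm  {meter:5.2f} mm  {error:+.2f}  {pct:+6.2f} %")
```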

Calibration of Torque Sensor


Aim: To calibrate the given torque sensor. Apparatus: Torque set-up, instrument tutor, screwdriver, wires.
Procedure:
1. Connect the sensor to the tutor.
2. Adjust the potentiometer to read zero.
3. Apply a weight of 1 kg and adjust the potentiometer to read 1 kg.
4. Note down the potentiometer reading for each increment of weight.
5. Plot the graph of % Error vs. actual torque.

Sl. No. | Actual torque (kg-m) | Meter reading (kg-m) | Error | % Error
1       | 1                    |                      |       |
2       | 2                    |                      |       |

Error = Meter reading - Actual torque
% Error = (Error / Actual torque) x 100

Calibration of Strain Gauge


Aim: To calibrate the given strain gauge. Apparatus: Strain measuring set up, Instrument Tutor, screw driver. Theory: What Is Strain? Strain is the amount of deformation of a body due to an applied force. More specifically, strain (e) is defined as the fractional change in length, as shown in Figure 1 below.

Figure 1. Definition of Strain

Strain can be positive (tensile) or negative (compressive). Although dimensionless, strain is sometimes expressed in units such as in./in. or mm/mm. In practice, the magnitude of measured strain is very small, so strain is often expressed as microstrain (µε), which is ε x 10⁻⁶. When a bar is strained with a uniaxial force, as in Figure 1, a phenomenon known as Poisson strain causes the girth of the bar, D, to contract in the transverse, or perpendicular, direction. The magnitude of this transverse contraction is a material property indicated by its Poisson's ratio. The Poisson's ratio ν of a material is defined as the negative ratio of the strain in the transverse direction (perpendicular to the force) to the strain in the axial direction (parallel to the force), or ν = -εT/ε. Poisson's ratio for steel, for example, ranges from 0.25 to 0.3.
While there are several methods of measuring strain, the most common is with a strain gauge, a device whose electrical resistance varies in proportion to the amount of strain in the device. The most widely used gauge is the bonded metallic strain gauge. The metallic strain gauge consists of a very fine wire or, more commonly, metallic foil arranged in a grid pattern. The grid pattern maximizes the amount of metallic wire or foil subject to strain in the parallel direction (Figure 2). The cross-sectional area of the grid is minimized to reduce the effect of shear strain and Poisson strain. The grid is bonded to a thin backing, called the carrier, which is attached directly to the test specimen. The strain experienced by the test specimen is therefore transferred directly to the strain gauge, which responds with a linear change in electrical resistance. Strain gauges are available commercially with nominal resistance values from 30 to 3000 Ω, with 120, 350, and 1000 Ω being the most common values.

Figure 2. Bonded Metallic Strain Gauge
It is very important that the strain gauge be properly mounted onto the test specimen so that the strain is accurately transferred from the test specimen, through the adhesive and strain gauge backing, to the foil itself. A fundamental parameter of the strain gauge is its sensitivity to strain, expressed quantitatively as the gauge factor (GF). Gauge factor is defined as the ratio of fractional change in electrical resistance to the fractional change in length (strain):

GF = (ΔR/R) / (ΔL/L) = (ΔR/R) / ε

The gauge factor for metallic strain gauges is typically around 2.

Procedure:
1. Connect the strain gauge sensor to the tutor.
2. Adjust the potentiometer to read zero.
3. Apply a weight of 100 gm and adjust the potentiometer to read 100 gm.
4. Note down the potentiometer reading for each increment of weight.
5. Plot the graph of % Error vs. actual strain.

Sl. No. | Actual load (kg) | Meter reading (kg) | Bending strain (actual) | Bending strain (meter) | Error | % Error
1       | 1                |                    |                         |                        |       |
2       | 0.9              |                    |                         |                        |       |
3       | 0.8              |                    |                         |                        |       |

Error = strain (actual) - strain (meter)
% Error = {Error / strain (actual)} x 100

Observations:
1. Length of cantilever beam, L = 273 mm
2. Width, b = 38 mm
3. Thickness, t = 3 mm
4. Range of meter = 1 kg
5. Young's modulus of mild steel, E ≈ 2 x 10⁵ N/mm²

Calculations:
1. Distance of neutral axis from the outermost layer, c = t/2
2. Moment of inertia, I = bt³/12
3. Bending moment, M = applied load x L = 1 kg x 273 mm = 273 kg-mm
4. Bending stress, f = M x c / I   (since M/I = f/c)
5. Bending strain = bending stress / E
6. Error = strain (actual) - strain (meter)
7. % Error = (Error / strain (actual)) x 100
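The calculation can be sketched as follows, using the manual's cantilever dimensions. The 1 kg load is converted to newtons (g = 9.81 m/s² is an assumption here) so that the units are consistent with E expressed in N/mm², taking E ≈ 2 x 10⁵ N/mm² for mild steel.

```python
L = 273.0     # cantilever length, mm
b = 38.0      # width, mm
t = 3.0       # thickness, mm
E = 2.0e5     # Young's modulus of mild steel, N/mm^2 (approximate)

load = 1.0 * 9.81          # 1 kg applied load, converted to newtons (assumed g)
c = t / 2                  # neutral axis to outermost layer, mm
I = b * t**3 / 12          # moment of inertia, mm^4
M = load * L               # bending moment, N-mm
f = M * c / I              # bending stress, N/mm^2 (from M/I = f/c)
strain = f / E             # bending strain, dimensionless
print(round(strain * 1e6), "microstrain")
```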


Calibration of Thermocouple
Aim: To calibrate the given thermocouple. Apparatus: Thermocouple, thermometer, instrument tutor, water bath, connecting wires, screwdriver, cold bath.
Thermocouples are a widely used type of temperature sensor and can also be used as a means to convert a thermal potential difference into an electric potential difference. They are cheap and interchangeable, have standard connectors, and can measure a wide range of temperatures. The main limitation is precision; system errors of less than 1 °C can be difficult to achieve.
Principle of operation
In 1821, the Baltic German physicist Thomas Johann Seebeck discovered that when any conductor (such as a metal) is subjected to a thermal gradient, it will generate a voltage. This is now known as the thermoelectric effect or Seebeck effect. Any attempt to measure this voltage necessarily involves connecting another conductor to the "hot" end. This additional conductor will then also experience the temperature gradient, and develop a voltage of its own which will oppose the original. Fortunately, the magnitude of the effect depends on the metal in use, so using a dissimilar metal to complete the circuit leaves a small difference voltage available for measurement, which increases with temperature. This difference is typically between 1 and about 70 microvolts per degree Celsius for the modern range of available metal combinations. Certain combinations have become popular as industry standards, driven by cost, availability, convenience, melting point, chemical properties, stability, and output. It is important to note that thermocouples measure the temperature difference between two points, not absolute temperature.

In traditional applications, one of the junctions (the cold junction) was maintained at a known reference temperature, while the other end was attached to a probe. For example, the cold junction may be at the copper traces on a circuit board; another temperature sensor measures the temperature at this point, so that the temperature at the probe tip can be calculated.


Thermocouples can be connected in series with each other to form a thermopile, where all the hot junctions are exposed to the higher temperature and all the cold junctions to a lower temperature. The voltages of the individual thermocouples then add up, giving a larger output voltage. Having a known-temperature cold junction available, while useful for laboratory calibrations, is simply not convenient for most directly connected indicating and control instruments. These instead incorporate into their circuits an artificial cold junction using some other thermally sensitive device (such as a thermistor or diode) to measure the temperature of the input connections at the instrument, with special care being taken to minimize any temperature gradient between terminals. Hence, the voltage from a known cold junction can be simulated and the appropriate correction applied. This is known as cold junction compensation; it can also be performed in software. Device voltages can be translated into temperatures by two methods: values can either be found in look-up tables or approximated using polynomial coefficients. Usually the thermocouple is attached to the indicating device by a special wire known as the compensating or extension cable. The terms are specific. Extension cable uses wires of nominally the same conductors as used at the thermocouple itself. These cables are less costly than thermocouple wire, although not cheap, and are usually produced in a convenient form for carrying over long distances, typically as flexible insulated wiring or multicore cables. They are usually specified for accuracy over a more restricted temperature range than the thermocouple wires, and are recommended for best accuracy. Compensating cables, on the other hand, are less precise but cheaper.
They use quite different, relatively low cost alloy conductor materials whose net thermoelectric coefficients are similar to those of the thermocouple in question (over a limited range of temperatures), but which do not match them quite as faithfully as extension cables. The combination develops similar outputs to those of the thermocouple, but the operating temperature range of the compensating cable is restricted to keep the mis-match errors acceptably small. The extension cable or compensating cable must be selected to match the thermocouple. It generates a voltage proportional to the difference between the hot junction and cold junction, and is connected in the correct polarity so that the additional voltage is added to the thermocouple voltage, compensating for the temperature difference between the hot and cold junctions.


Voltage-Temperature Relationship
The relationship between the temperature difference and the output voltage of a thermocouple is nonlinear and is represented by a polynomial of the form T = a0 + a1 v + a2 v² + ... + aN vᴺ.

The coefficients aₙ are given for n from 0 up to between 5 and 9.
Applications
Thermocouples are most suitable for measuring over a large temperature range, up to about 1800 K. They are less suitable for applications where smaller temperature differences must be measured with high accuracy, for example the range 0-100 °C with 0.1 °C accuracy. For such applications, thermistors and RTDs are more suitable.
Procedure:
1. Connect the thermocouple to the tutor.
2. Adjust the digital meter to read room temperature.
3. Keep the thermometer and thermocouple in the water bath and heat the water.
4. Note the thermometer reading (actual) and the meter reading (thermocouple) for each increment of temperature.
5. Switch off the water heater, add cold water, and note both readings while the temperature decreases.
6. Plot the graph of % Error vs. actual temperature.

Sl. No. | Thermometer reading, actual (°C) | Thermocouple reading, meter (°C) | Error while temp. increasing | % Error | Error while temp. decreasing | % Error
1       |                                  |                                  |                              |         |                              |
2       |                                  |                                  |                              |         |                              |

Error = Thermometer reading - Thermocouple reading
% Error = {Error / Thermometer reading} x 100
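The polynomial conversion from thermocouple voltage to temperature can be sketched as follows. The coefficients here are invented for illustration only; they are not the standard coefficients of any real thermocouple type.

```python
# Hypothetical polynomial coefficients a_n (T in deg C, v in mV) -- illustrative only
a = [0.0, 25.0, -0.5]

def to_temperature(v, coeffs):
    """Evaluate T = a0 + a1*v + a2*v**2 + ..."""
    return sum(c * v**n for n, c in enumerate(coeffs))

print(to_temperature(4.0, a))  # 25*4 - 0.5*16 = 92.0
```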

Calibration of Pressure Gauge


Aim: To calibrate the given pressure gauge Apparatus: Pressure gauge set up, SAE oil, Weights. Theory:

What is a Pressure Gauge? Many processes in the modern world involve the measurement and control of pressurized liquid and gas systems. This monitoring reflects certain performance criteria that must be controlled to produce the desired results of the process and ensure its safe operation. Boilers, refineries, water systems, and compressed gas systems are but a few of the many applications for pressure gauges. The mechanical pressure-indicating instrument, or gauge, consists of an elastic pressure element; a threaded connection called the "socket"; a sector-and-pinion gear mechanism called the "movement"; and the protective case, dial, and viewing lens assembly. The elastic pressure element is the member that actually displaces or moves under the influence of pressure. When properly designed, this pressure element is both highly accurate and repeatable. The pressure element is connected to the geared "movement" mechanism, which in turn rotates a pointer over a graduated dial. It is the pointer's position relative to the graduations that the viewer uses to determine the pressure indication. The most common pressure gauge design was invented by the French industrialist Eugene Bourdon in 1849. It utilizes a curved tube as the pressure-sensing element. A less common pressure element design is the


diaphragm or disk type, which is especially sensitive at lower pressures. This article will focus on the Bourdon tube pressure gauge.
Calibration
Calibration occurs just before the final assembly of the gauge into the protective case and lens. The assembly consisting of the socket, tube, and movement is connected to a pressure source along with a known "master" gauge, which is simply a high-accuracy gauge of known calibration. Adjustments are made to the assembly until the new gauge reflects the same pressure readings as the master. Accuracy requirements of 2 percent difference are common, but some may be 1 percent, 0.5 percent, or even 0.25 percent. Selection of the accuracy range depends solely upon how important the desired information is to the control and safety of the process. Most manufacturers use a graduated dial featuring a 270 degree sweep from zero to full range. These dials can be from less than 1 inch (2.5 centimeters) to 3 feet (0.9 meter) in diameter, with the largest typically used for extreme accuracy. By increasing the dial diameter, the circumference around the graduation line is made longer, allowing for many finely divided markings. These large gauges are usually very fragile and used for master purposes only. Masters themselves are periodically inspected for accuracy using dead weight testers, very accurate hydraulic apparatus that are traceable to the National Bureau of Standards in the United States.

It is interesting to note that when the gauge manufacturing business was in its infancy, the theoretical design of the pressure element was still developing. The Bourdon tube was made with very general design parameters, because each tube was pressure tested to determine what range of service it was suitable for. One did not know exactly what pressure range was going to result from the rolling and heat treating process, so these instruments were sorted at calibration for specific application. Today, with the development of computer modeling and many decades of experience, modern Bourdon tubes are precisely rolled to specific dimensions that require little, if any, calibration. Modern calibration can be performed by computers using electronically controlled mechanical adjusters to adjust
the components. This unfortunately eliminates the image of the master craftsman sitting at the calibration bench, finely tuning a delicate, watch-like movement to extreme precision. Some instrument repair shops still perform this unique work, and these beautiful pressure gauges stand as equals to the clocks and timepieces created by master craftsmen years ago. Procedure: 1. Turn the screw in anti-clockwise direction and fill up the barrel with SAE oil and cover it with lid. 2. Adjust the pressure gauge to read zero.


3. Place a 1 kg/cm² weight on the barrel lid and turn the screw clockwise till the weight is just lifted up, then note the pressure gauge reading.
4. Repeat step 3 for the 2, 3, 4 and 5 kg/cm² weights.
5. Plot the graph of % Error vs. actual pressure.

Sl. No. | Actual pressure (kg/cm²) | Gauge pressure (kg/cm²) | Error | % Error
1       | 1                        |                         |       |
2       | 2                        |                         |       |

Error = Gauge pressure - Actual pressure
% Error = (Error / Actual pressure) x 100

Gear Tooth Thickness Measurement


Aim: To measure the tooth thickness of a given spur gear. Instruments required: Gear tooth vernier, vernier caliper, spur gear.
Theory: The tooth thickness is defined as the length of the arc of the pitch circle between opposite faces of the same tooth. Most of the time a gear tooth vernier is used to determine the tooth thickness. As the tooth thickness varies from top to bottom, any instrument for measuring on a single tooth must
a) measure the tooth thickness at a specified position on the tooth, and
b) fix the position at which the measurement is taken.
The gear tooth vernier therefore consists of a vernier caliper for making the measurement M, combined with a vernier depth gauge for setting the dimension h at which the measurement M is to be effected.


Terminology for Spur Gears
Fig. Spur Gear
Pitch surface: The surface of the imaginary rolling cylinder (cone, etc.) that the toothed gear may be considered to replace.

Pitch circle: A right section of the pitch surface.
Addendum circle: A circle bounding the ends of the teeth, in a right section of the gear.
Root (or dedendum) circle: The circle bounding the spaces between the teeth, in a right section of the gear.
Addendum: The radial distance between the pitch circle and the addendum circle.
Dedendum: The radial distance between the pitch circle and the root circle.
Clearance: The difference between the dedendum of one gear and the addendum of the mating gear.
Face of a tooth: That part of the tooth surface lying outside the pitch surface.
Flank of a tooth: The part of the tooth surface lying inside the pitch surface.
Circular thickness (also called the tooth thickness): The thickness of the tooth measured on the pitch circle. It is the length of an arc and not the length of a straight line.
Tooth space: The distance between adjacent teeth measured on the pitch circle.
Backlash: The difference between the circular thickness of one gear and the tooth space of the mating gear.
Circular pitch p: The width of a tooth and a space, measured on the pitch circle.
Diametral pitch P: The number of teeth of a gear per inch of its pitch diameter.
A toothed gear must have an integral number of teeth. The circular pitch, therefore, equals the pitch circumference divided by the number of teeth. The diametral pitch is, by definition, the number of teeth divided by the pitch diameter. That is,

p = πD / N  and  P = N / D

Hence pP = π, where

p = circular pitch
P = diametral pitch
N = number of teeth
D = pitch diameter

That is, the product of the diametral pitch and the circular pitch equals π.
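The relation between circular and diametral pitch can be verified numerically for a hypothetical gear, say 40 teeth on a 5 inch pitch diameter (both values assumed for illustration):

```python
import math

N = 40       # number of teeth (assumed)
D = 5.0      # pitch diameter, inches (assumed)

p = math.pi * D / N    # circular pitch = pitch circumference / number of teeth
P = N / D              # diametral pitch = number of teeth / pitch diameter
print(P, round(p, 4), round(p * P, 4))   # the product p * P equals pi
```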

Module m: Pitch diameter divided by the number of teeth. The pitch diameter is usually specified in inches or millimeters; in the former case the module is the inverse of the diametral pitch.
Fillet: The small radius that connects the profile of a tooth to the root circle.
Pinion: The smaller of any pair of mating gears. The larger of the pair is called simply the gear.
Velocity ratio: The ratio of the number of revolutions of the driving (or input) gear to the number of revolutions of the driven (or output) gear, in a unit of time.
Pitch point: The point of tangency of the pitch circles of a pair of mating gears.
Common tangent: The line tangent to the pitch circles at the pitch point.
Line of action: A line normal to a pair of mating tooth profiles at their point of contact.
Path of contact: The path traced by the contact point of a pair of tooth profiles.
Pressure angle φ: The angle between the common normal at the point of tooth contact and the common tangent to the pitch circles. It is also the angle between the line of action and the common tangent.

Base circle: An imaginary circle used in involute gearing to generate the involutes that form the tooth profiles.

It should be noted that M is a chord AC, but the tooth thickness is specified as an arc distance ADC. Also h is the distance EB and this is slightly greater than the addendum ED.


Procedure:
1) Count the number of teeth (N).
2) Calculate the pitch diameter: D = outer diameter of gear (mm) - depth of tooth (mm).
3) Calculate the diametral pitch: P = N/D.
4) Calculate the module: m = 1/P.
5) Calculate the depth setting (see fig. 2): h = (mN/2){1 + 2/N - cos(90°/N)}.
6) Measure M using the gear tooth vernier set at h.
7) Compare with the calculated chordal thickness (see fig. 2): M = mN sin(90°/N).
The actual tooth thickness, i.e. the arc length, can then be determined as M' = M θ / sin θ, where θ = 90°/N expressed in radians.
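The two vernier settings can be computed as below for a hypothetical gear with module m = 3 mm and N = 24 teeth (both values assumed). As a check, the arc thickness recovered in the last step should equal half the circular pitch, πm/2.

```python
import math

m = 3.0      # module, mm (assumed)
N = 24       # number of teeth (assumed)

theta = math.radians(90.0 / N)                     # 90/N degrees, in radians
h = (m * N / 2) * (1 + 2 / N - math.cos(theta))    # depth setting (step 5)
M = m * N * math.sin(theta)                        # chordal thickness (step 7)
arc = M * theta / math.sin(theta)                  # arc tooth thickness
print(round(h, 4), round(M, 4), round(arc, 4))
```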

Sine bar

A sine bar is a tool used to measure angles in metalworking.

10" and 100 mm sine bars


It consists of a hardened, precision-ground body with two precision-ground cylinders fixed at each end. The distance between the centers of the cylinders is precisely controlled, and the top of the bar is parallel to a line through the centers of the two rollers. The dimension between the two rollers is chosen to be a whole number (for ease of later calculations) and forms the hypotenuse of a triangle when in use. The image shows a 10 inch and a 100 mm sine bar. When a sine bar is placed on a level surface, the top edge will be parallel to that surface. If one roller is raised by a known distance, the top edge of the bar will be tilted by the same amount, forming an angle that may be calculated by the application of the sine rule.

The hypotenuse is a constant dimension (100 mm or 10 inches in the examples shown). The height h is the dimension between the bottom of one roller and the table's surface. The angle is calculated by using the sine rule: sin θ = h / L, where L is the distance between the roller centers.

Angles may be measured or set with this tool. For precision measurements where the bar must be set at an angle, gauge blocks are traditionally used. The sine bar is set up on a surface plate to the nominal angle of the taper plug gauge, which is then placed in position on the bar, being prevented from sliding down by the stop plate at the end. Care must be taken to ensure that the axis of the plug gauge is aligned with the sine bar; pieces of plasticine will be found useful for preventing sideways movement. The dial gauge, supported in a stand on the surface plate, is then passed over the plug gauge near each end and also at one or two positions between the ends. If there is any variation in the readings, two alternatives are available for finding the true angle of the cone. Either the variation over a measured distance along the surface of the plug gauge can be used to obtain the difference between the true angle and the angle set up, or the height of the slip gauge pile can be adjusted until no variation occurs in the reading of the dial gauge.
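The sine-rule computation can be sketched both ways for a 100 mm sine bar; the heights used are illustrative values, not measurements.

```python
import math

L = 100.0    # distance between roller centers, mm

# Slip gauge height needed to set a nominal 30 degree angle: h = L * sin(theta)
h = round(L * math.sin(math.radians(30)), 6)
print(h, "mm")

# Angle obtained from a measured height of 25.882 mm: theta = asin(h / L)
theta = math.degrees(math.asin(25.882 / L))
print(round(theta, 3), "deg")
```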

Standard Roller Set


Aim: To determine the external taper angle of a plug gauge using standard roller set. Instruments required: Standard roller set, slip gauge box, micrometer, surface plate, taper plug gauge. Procedure: In this method, measurements are made over two equal rollers standing on slip gauges at each side of the taper plug gauge at two positions, one near the small end and one near the big end as shown in fig. (a). From these, together with the differences in heights of the stacks of slip gauge, the angle of taper is easily calculated.


Fig. (a)
For satisfactory results, the following precautions are to be observed.
1. The plug gauge should be set upon its smaller end so that the force exerted by the micrometer for the diametral measurements tends to wedge the rollers against the plug and the slip gauges. If the gauge is set with its bigger end down, the rollers have a tendency to ride up the taper.
2. The force applied by the micrometer must be kept small by using the instrument with a light touch, to avoid error caused by elastic indentation.
3. Rollers should preferably be of about the same diameter as the micrometer anvils, which can then be rested on the slip gauges to ensure that the measurement is made at right angles to the gauge axis.
tan(θ/2) = (M2 - M1) / [2 (h2 - h1)]
where M1 and M2 are the measurements over the rollers and h1 and h2 the corresponding slip gauge heights.
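Worked numerically with made-up readings (all dimensions in mm, values assumed for illustration), the included taper angle follows from the formula above:

```python
import math

M1, h1 = 30.000, 10.0    # measurement over rollers near the small end (assumed)
M2, h2 = 36.000, 60.0    # measurement over rollers near the big end (assumed)

# tan(theta/2) = (M2 - M1) / (2 * (h2 - h1))
half = math.atan((M2 - M1) / (2 * (h2 - h1)))
taper = math.degrees(2 * half)       # included taper angle, degrees
print(round(taper, 3))
```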

Measurement of pitch or pitch errors


It is frequently necessary to measure the pitch of, say, a tap used to produce a 20 mm ISO coarse thread. The measurement must be made in such a way that other features or dimensions (e.g., diameter and thread angle) do not influence the results in any way. In general, two methods are employed:
1. Optical projection
2. Pitch measuring machine


a) Optical projection. A toolmaker's microscope uses the principle of optical projection. The screw is set between centres on the microscope table, care being taken to ensure that the table is rotated by an angular amount equivalent to the helix angle of the thread. With a master thread profile mounted in the projection head, a sharp image of the thread is projected on the screen to match the master profile, and a reading is taken on the micrometer dials controlling the movement of the table along the screw axis.

b) The table is now traversed until the magnified image of the next thread matches the master profile, and the reading is taken at the micrometer indexing dial.

c) The pitch is the distance traversed by the table, and if the pitch is correct the amount of traverse will equal the stated value for the pitch. The principle is illustrated in the figure below, and it may be appreciated that both depth of thread and flank angles may also be checked or measured using the principle of projection.
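The steps above yield a series of micrometer readings, one per thread; subtracting successive multiples of the nominal pitch gives the cumulative pitch error at each thread. A minimal sketch, with hypothetical readings (not measurements from this manual):

```python
def pitch_errors(readings_mm, nominal_pitch_mm):
    """Cumulative pitch error at each thread from successive table readings.

    readings_mm: micrometer readings taken as each successive thread image
    is matched to the master profile on the projection screen.
    Error at thread i = (reading_i - reading_0) - i * nominal_pitch.
    """
    start = readings_mm[0]
    return [round((r - start) - i * nominal_pitch_mm, 4)
            for i, r in enumerate(readings_mm)]

# Hypothetical readings (mm) for a thread of 2.5 mm nominal pitch:
print(pitch_errors([0.000, 2.502, 5.001, 7.506], 2.5))
```

A steadily growing error indicates a progressive (drunken-thread-free) pitch error; scattered errors indicate periodic error.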

The dial test indicator (DTI)


It is also known as a dial gauge indicator or clock gauge (because of its similarity to a clock). The DTI is a mechanical device for sensing linear variations: it measures the displacement of its plunger or stylus on a circular dial by means of a rotating pointer. There are two types of DTI: a) Plunger type, b) Lever type.


Plunger type DTI


Figure shows the plunger type DTI. It generally consists of a rack and pinion mechanism together with a gear train, giving a scale movement larger than the plunger movement. The main scale is graduated into equal divisions, each corresponding to a 0.01 mm movement of the plunger. As there are 100 equal divisions, one complete revolution of the pointer corresponds to 0.01 × 100, i.e., 1 mm of plunger movement. Hence a pointer movement from mark 10 to mark 20, or from mark 20 to mark 30, and so on, indicates a plunger movement of 0.1 mm. This type has a long plunger travel and is fitted with a secondary scale and pointer (on a smaller dial) to indicate the number of complete revolutions turned through, one revolution being equivalent to 1 mm of plunger movement. This secondary scale is also known as the revolution counter. To enable the instrument to be zeroed at any convenient position, the main scale can be rotated and locked in place using the scale locking screw (bezel clamp) indicated in the figure.
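The reading arithmetic described above, revolution counter plus main-scale divisions, can be sketched as follows (the function name is an illustrative assumption):

```python
def dti_reading_mm(revolutions, main_scale_divisions):
    """Total plunger displacement indicated by a plunger-type DTI.

    One full revolution of the main pointer = 1 mm (read on the
    secondary revolution-counter dial); each main-scale division = 0.01 mm.
    """
    return revolutions * 1.0 + main_scale_divisions * 0.01

# Example: 3 on the revolution counter, main pointer at the 42nd division
print(dti_reading_mm(3, 42))  # 3 x 1 mm + 42 x 0.01 mm = 3.42 mm
```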

Gauge blocks (also known as gage blocks, Johansson gauges, or slip gauges) are precision-ground and lapped measuring standards. They are used as references for setting measuring equipment such as micrometers, sine bars, and dial indicators (when used in an inspection role).


Metric gauge blocks

A metric gauge block set consists of a range of blocks of varying sizes, along with two wear blocks. In use, the blocks are removed from the set, cleaned of their protective coating (petroleum jelly or oil), and wrung together to form a stack of the required dimension using the minimum number of blocks. The wear pieces are included at each end of the stack whenever possible, as they protect the lapped faces of the main pieces against damage. After use, the blocks are re-oiled or greased to protect their faces from corrosion.

Wringing is the process of sliding two blocks together so that their faces lightly bond. When combined with a very light film of oil, this action excludes any air from the gap between the two blocks. The alignment of the ultra-smooth surfaces in this manner permits molecular attraction to occur between the blocks, forming a very strong bond with no discernible alteration to the stack's overall dimension.
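Building a stack "with the minimum number of blocks" is normally done by clearing the smallest decimal place of the target first; for illustration only, a brute-force sketch that searches small combinations works just as well. The block sizes listed are a hypothetical subset of an M-series box, not the contents of any particular set:

```python
from itertools import combinations

def min_block_stack(target_mm, box, max_blocks=4, tol=1e-6):
    """Smallest set of distinct blocks from 'box' summing to target_mm.

    Exhaustive search over combinations of increasing size, so the first
    match found uses the minimum number of blocks. Real workshop practice
    instead clears the right-most decimal place of the target first.
    """
    for n in range(1, max_blocks + 1):
        for combo in combinations(box, n):
            if abs(sum(combo) - target_mm) < tol:
                return sorted(combo)
    return None  # target not buildable from this box

# Hypothetical subset of an M-series slip gauge box (sizes in mm):
box = [1.005, 1.01, 1.02, 1.08, 1.1, 1.3, 1.5, 2.0, 5.0, 10.0, 20.0, 30.0]
print(min_block_stack(32.105, box))  # [1.005, 1.1, 30.0]
```

The wear blocks would then be wrung onto each end of the chosen stack before use.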

Gauge block accessory set

The accessory set provides holders and tools that extend the usefulness of the gauge block set: a means of securely clamping large stacks together, along with reference points and scribers. Slip gauges are made from a select grade of carbide with a hardness of about 1500 HV. Long-series slip gauges are made from high-quality steel, with a cross-section of 35 × 9 mm and holes for clamping two slips together.

