
I. OBJECTIVES

II. THEORY

Measurement has been crucial since humans first settled from a nomadic existence and began employing building materials, occupying land, and trading with their neighbors. As society has become more technologically oriented, much better measurement accuracy is needed in a growing number of sectors, from microelectronics to interplanetary ranging. Many different measurements have been used throughout history, some of which are still in use today and some of which now appear downright odd. In the past, the human body was the dominant basis for measurement. Widely accepted measurements included step distance, finger width, and foot length. The earliest known standardized unit of linear measurement was the Egyptian Royal Cubit. In ancient Egypt, a cubit was the distance from the tip of the finger to the elbow. It could be broken into shorter units such as the foot, hand (which, at 4 inches, is still used to indicate the height of horses), or finger, or combined to form longer units such as the stride. Because of the wide range in individual body sizes, the cubit could vary significantly.

The approaches for establishing engineering length standards were discussed. Instruments for producing accurate and precise linear measurements are readily available, and both direct and indirect linear measuring tools adhere to these established standards of length. The two linear measuring devices most frequently used in machine shops and tool rooms are the vernier caliper and the micrometer. Most people's first experience with linear measurement is with a steel rule or tape measure. Measuring instruments are designed either for end measurements, which measure the distance between two surfaces (e.g., with a screw gauge), or for line measurements, which measure the distance between two graduated lines (e.g., with a steel rule). The engineer of today, however, has access to a variety of tools, from purely mechanical instruments to digital electronic ones. To determine which instrument is ideal for an application, only the application's nature and the cost of measurement need to be taken into account. This chapter covers a wide range of linear measurement tools, from a basic steel rule to micrometers and digital calipers. To achieve measurement precision, many of these tools, such as the depth gauge and height gauge, must be used in conjunction with a datum. The "datum plane," of which the surface plate and V-block are the two most significant examples, serves as the basis for all dimensional measurements.

III. EQUIPMENT

We measured the various diameters of the circular objects in the set of weights using a vernier caliper. The vernier caliper's elongated jaws can precisely measure the diameter of rounded objects. A vernier caliper has two scales: a sliding vernier scale in addition to the primary, fixed scale. The primary scale displays readings in millimeters. Unlike an ordinary scale, a vernier caliper can take readings with an accuracy of up to 0.001 cm (0.01 mm). The two scales are read in conjunction to obtain a precise measurement.
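The quoted accuracy follows directly from the scale construction. As a minimal sketch (the function name is ours, not from any standard), the least count is the smallest main-scale division divided by the number of vernier divisions:

```python
# Minimal sketch: the least count (finest readable value) of a vernier caliper
# equals one main-scale division divided by the number of vernier divisions.

def least_count(main_division, vernier_divisions):
    """e.g. 1 mm main divisions with 50 vernier divisions -> 0.02 mm."""
    return main_division / vernier_divisions

print(least_count(1.0, 50))    # 0.02 mm (a common metric caliper)
print(least_count(1.0, 100))   # 0.01 mm, i.e. the 0.001 cm quoted above
```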

IV. DRAWING
V. RESEARCH QUESTIONS
1. Discuss other precision instruments used to measure linear dimensions.

Vernier Height Gauge: This tool is used to accurately measure the heights of various items. It frequently has a marking attachment that acts at the measurement level and enables the user to repeatedly mark vertical distances on metal workpieces. The two most common tools for taking precise vertical measurements are the vernier height gauge and the electronic height gauge.

Slip gauges were invented by C. E. Johansson and are commonly referred to as "Johansson gauge blocks." These are rectangular steel blocks roughly 32 mm x 9 mm in cross-section. Owing to molecular attraction and air pressure, if two slip gauges are slid together under slight pressure, they will adhere to each other quite securely. This process is called "wringing." It is an excellent method for joining several gauge blocks together to build up a desired dimension.
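In practice, a stack is planned by clearing the finest decimal place of the target length first. A minimal Python sketch, assuming a simplified, hypothetical block series (1.001-1.009 mm, 1.01-1.49 mm, 0.5-9.5 mm in 0.5 mm steps, 10-100 mm in 10 mm steps; real metric sets are larger) and targets of a few millimetres or more:

```python
def build_stack(target_mm):
    """Greedily pick gauge blocks, clearing the finest decimal place first.
    Works in integer micrometres to avoid floating-point error."""
    rem = round(target_mm * 1000)            # remaining length in micrometres
    stack = []
    if rem % 10:                             # 1.001-1.009 mm: clear thousandths
        block = 1000 + rem % 10
        stack.append(block / 1000)
        rem -= block
    if rem % 500:                            # 1.01-1.49 mm: clear hundredths/tenths
        block = 1000 + rem % 500
        stack.append(block / 1000)
        rem -= block
    if rem % 10_000:                         # 0.5-9.5 mm in 0.5 mm steps
        block = rem % 10_000
        stack.append(block / 1000)
        rem -= block
    while rem:                               # 10-100 mm blocks in 10 mm steps
        block = min(rem, 100_000)
        stack.append(block / 1000)
        rem -= block
    return stack

print(build_stack(41.125))   # [1.005, 1.12, 9.0, 30.0] wrung together
```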

Comparators: A comparator operates on relative measurement, so it provides only the dimensional difference relative to a basic dimension. The comparator is set with a standard or master that represents the original size, and it compares an unknown component's dimensions to that setting, indicating the deviation from the master.

Comparators have the benefit of not requiring much operator experience, which makes them easy to use. There are four categories of comparators: mechanical, electrical, optical, and pneumatic.

2. Discuss the Sl system of units and the English system of units.

INTERNATIONAL SYSTEM OF UNITS (SI)


The most widely used system of measurement is the metric system, which has been updated into the International System of Units. The system is built on seven base units. To express multiples and fractions of each unit, it defines twenty prefixes attached to the unit symbols and names.
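As a quick illustration (the table and helper below are our own, not part of any standard library), the twenty prefixes map cleanly onto powers of ten:

```python
# The 20 SI prefixes as powers of ten.
SI_PREFIXES = {
    "yotta": 24, "zetta": 21, "exa": 18, "peta": 15, "tera": 12,
    "giga": 9, "mega": 6, "kilo": 3, "hecto": 2, "deca": 1,
    "deci": -1, "centi": -2, "milli": -3, "micro": -6, "nano": -9,
    "pico": -12, "femto": -15, "atto": -18, "zepto": -21, "yocto": -24,
}

def to_base_units(value, prefix):
    """Convert a prefixed quantity to the base unit, e.g. 5 kilo -> 5000."""
    return value * 10 ** SI_PREFIXES[prefix]

print(to_base_units(5, "kilo"))   # 5000
```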

As a result of research that began in 1948, the International System of Units was published in 1960. SI is founded on the metre-kilogram-second (MKS) system of units. SI was designed to evolve over time: as the technology of everyday measuring equipment continues to progress, it was expected that new units and prefixes would be introduced and unit definitions amended at the international level. The 24th and 25th General Conferences on Weights and Measures (CGPM), held in 2011 and 2014, where a proposal to redefine the "kilogram" was considered, are two outstanding examples of this dynamic system. The proposal aimed to define the kilogram in terms of a natural constant rather than a physical artifact.

As described in the preceding part, there was a striking lack of coordination among the many professions in many regions of the world. SI was established to promote uniformity. The Metre Convention of 1875 created the CGPM with the goal of uniting various international organizations to develop the definitions and standards of the proposed new system. They were also charged with deciding the guidelines for how the new system would be promoted and adopted globally.

Theoretically, SI can be applied to any physical measurement. Even so, it was inevitable that non-SI units of measurement would continue to be used in technical, scientific, and commercial literature for some time to come. Additionally, because some units of measurement are so firmly ingrained in history and particular societies, they will remain in use for a very long time. These units have been classified by the CIPM, and the SI Brochure now includes them.

3. What is metrology?

Metrology is unquestionably at the heart of all practical scientific endeavors. The International Bureau of Weights and Measures defines it as "the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology" [1]. Metrology is crucial because practically every aspect of daily life (including practical science, technology, engineering, and medicine) involves measurements that are essential to human wellbeing, economic prosperity, quality of life, and environmental protection. The framework of metrology ensures that these measurements are reliable, comparable, and accurate, giving confidence in a measurement at a given level (usually by quoting a measurement uncertainty). When these characteristics are attached to a measurement, waste is reduced, commerce is enabled, infrastructure remains operational, technology develops, and the economy thrives; global collaboration, trade, and agreement are encouraged; and our ongoing health, safety, and quality of life are ensured. In other words, metrology creates frameworks and methods for quantification and, through them, supports accuracy and consistency in every measurement. By most definitions, metrology is still a relatively small endeavor compared with all the activities that rely on it, and this distinction is worth examining more closely.

4. Define the following terms:

a) Allowance: This is the prescribed difference between the dimensions of the mating parts. The allowance is positive when the shaft is smaller than the hole and negative when the shaft is larger than the hole.
b) Nominal size: The term nominal size refers to the designation used for broad identification.
c) Tolerance: The permissible deviation from a specified dimension. It is the variation allowed in maintaining a specified dimension in a machined component; for example, the difference between the upper limit and the lower limit of a dimension.
d) Interference fit: Interference fits are also known as press fits or friction fits. An interference fit is a fastening between two parts achieved by friction after the joined parts are pressed together.

In these fits, the minimum permissible shaft diameter is larger than the maximum permissible hole diameter. Because the shaft is always larger than the hole before assembly, the allowance is negative; such a negative fit is called an interference fit.
e) Unilateral tolerance: In this system, a part's dimension may vary on one side of the basic size only; that is, the entire tolerance lies on one side of the basic size.
f) Bilateral tolerance: In this system, the part's dimension is flexible on both sides of the fundamental size. In
other words, the tolerance limits are on either side of the fundamental size.
g) Clearance fit: A clearance fit is one in which an air gap or clearance always exists between the shaft and the hole. Such clearance makes the joint loose.

Clearance fits are guaranteed to have clearance and are made for movable couplings of parts, such as clutch discs, hydraulic machine pistons, guide bushings, sliding gears, and pivots, in running and sliding fits.

h) Transition fit: A transition fit is a fit that can offer either interference or clearance.

In a transition fit, the tolerance zones of the hole and the shaft overlap. Transition fits lie between clearance fits and interference fits.

They are employed in situations where precise positioning is required, minimal clearance is acceptable, and holes and shafts must be accurately made.
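The three fit definitions above can be checked numerically by comparing tolerance zones. A minimal sketch with hypothetical limits for a 25 mm basic size (the values are illustrative, not taken from a standard fit table):

```python
# Classify a hole/shaft pair by comparing tolerance zones, per the
# definitions above. All limits are in mm; values are hypothetical.

def classify_fit(hole_min, hole_max, shaft_min, shaft_max):
    """Return 'clearance', 'interference', or 'transition'."""
    if shaft_max < hole_min:     # shaft always smaller: guaranteed gap
        return "clearance"
    if shaft_min > hole_max:     # shaft always larger: guaranteed press
        return "interference"
    return "transition"          # zones overlap: either may occur

print(classify_fit(25.000, 25.021, 24.980, 24.993))  # clearance
print(classify_fit(25.000, 25.021, 25.035, 25.048))  # interference
print(classify_fit(25.000, 25.021, 25.015, 25.028))  # transition
```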

5. Enumerate some practices to be observed for the proper care of precision instruments.

If results cannot be trusted to be precise and dependable, even the best measuring devices in the world are
practically useless. By taking good care of them, measuring devices like micrometers, calipers, and dial gauges
can maintain a high level of accuracy and dependability.
*Lubricate Instruments Appropriately to Prevent Corrosion

Proper lubrication of measuring equipment will prevent damage from corrosion and oxidation. After each use, precision measuring instruments should be lightly lubricated, and any excess oil should be wiped off metal surfaces with a dry, clean cloth. Avoid penetrating oils or other chemicals made for purposes other than light lubrication. A seemingly "wet" instrument can collect and retain tiny particles that cause internal wear of precisely made elements, especially those that move against one another.

*Store Instruments in an Appropriate Environment

Make sure the site is well-protected from damaging elements before storing an instrument there. Precision
instruments should not be kept in a drawer where they may bump into one another; instead, place the
instruments in padded cases or use dividers to keep them apart from one another. Unless the instruments are
well-protected in cases, avoid stacking instruments on top of each other.

*Know How to Properly Handle Instruments During Use

When a precision measuring tool is in your hands, it is even more crucial to protect it from damage that could cause inaccurate readings. Avoiding sudden shocks and rough treatment is one of several things you should do to help prevent damage or miscalibration during use. Measurement devices should not be dropped, thrown, or struck against hard surfaces.

*Obtain Professional Recalibration Assistance

It's crucial to maintain a professional relationship with a supplier or maker of precision instruments since
some measuring devices need to be periodically recalibrated to guarantee they continue to perform as
intended. They can maintain your instruments to the highest standards, ensuring years of dependable
service.

6. Discuss with the aid of illustrations how to read the vernier in metric and in English units.

Step 1 – Look at the main scale.


Read the value on the main scale before conducting the measurement. This will be displayed in millimeters
(mm) on a metric vernier caliper. 
The smallest value that can be read from the main scale is 1 mm (indicated by a single increment).

The number directly to the left of the 0 marker on the vernier scale corresponds to the value on the main
scale.
 
In this instance, this value is 27mm.
 
As the zero is almost touching the 28mm mark, we can estimate that the distance between the jaws is
closer to 28mm than 27mm.

Step 2 – Look at the vernier scale.


Next, look at the vernier scale. The vernier scale of a metric vernier caliper has a measuring range of 1 mm. In this example, the vernier scale is graduated into 50 increments, so each increment corresponds to 0.02 mm. Vernier scales vary in graduation; some have 20 increments, each corresponding to 0.05 mm.

When reading the vernier scale, identify the increment that lines up most accurately with an increment on
the main scale. This value will make up the second part of your measurement.
 
In the example, the increments of both scales coincide at 0.8 mm on the vernier scale.

Step 3 – Add both values together


To get your total reading, add both the value from the main scale and the value from the vernier scale
together.
 
e.g. 27 + 0.8 = 27.8
 
So, the distance between the jaws of the vernier caliper is 27.8mm.
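The three steps can be expressed as one sum. A minimal sketch (the function name is ours), using the example's 27 mm main reading and the 0.8 mm vernier coincidence (the 40th division on a 0.02 mm scale):

```python
# Total reading = main-scale value + (coinciding vernier division x least count).

def vernier_reading(main_mm, vernier_division, least_count_mm=0.02):
    """Metric vernier caliper reading in mm."""
    return main_mm + vernier_division * least_count_mm

print(round(vernier_reading(27, 40), 2))  # 27.8
```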

Step 1 - Look at the main scale

When taking a measurement, you should first read the value on the main scale. On an imperial vernier caliper, this will be given in inches and tenths of an inch. The smallest value that can be read from the main scale is 0.025 inches (indicated by a single increment).

The value on the main scale is the number immediately to the left of the 0 marker of the vernier scale. In
this instance, this value is 0.4 inches.
 
As the zero is halfway between 0.4 and 0.425, we know that the distance between the jaws lies somewhere between these two values.
Step 2 - Look at the vernier scale
Next, look at the vernier scale. The vernier scale of an imperial caliper has a measuring range of 0.025
inches, and is graduated in 25 increments. Each increment represents 0.001 inches (a thousandth of an
inch).

When reading the vernier scale, identify the increment that lines up most accurately with an increment on
the main scale. This value will make up the second part of your measurement. 
 
In the example, the increments of both scales coincide at 17 thousandths of an inch (0.017).
Step 3 - Add values together

To get your total reading, add both the value from the main scale and the value from the vernier scale
together.
 
e.g. 0.4 + 0.017 = 0.417
 
So, the distance between the jaws of the vernier caliper is 0.417 inches.
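The same addition applies to the imperial reading. Working in thousandths of an inch keeps the arithmetic exact (again a sketch, with our own function name):

```python
# Imperial caliper: main scale in inches, vernier divisions of 0.001 inch.
# Summing in integer thousandths avoids floating-point rounding error.

def imperial_reading(main_in, vernier_thousandths):
    """Total reading in inches."""
    return (round(main_in * 1000) + vernier_thousandths) / 1000

print(imperial_reading(0.400, 17))  # 0.417
```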

7. Make an engineering drawing of a shaft showing dimensions, tolerances, and surface finish.


VI. REFERENCES
VII. APPENDIX
