https://doi.org/10.1007/s00170-021-07416-5
ORIGINAL ARTICLE
Abstract
Automated dimensional inspection is commonly expensive because it requires high-precision measurement devices. To perform a precision measurement, the technician must be highly skilled and must fully understand the operation of the equipment. This study proposes a method for reconstructing the two-dimensional profiles of ring-shaped objects using image processing. First, an industrial camera captures partial images of the object. Then, through several image processing procedures such as binarization, line detection, and contour recognition, the profiles in the images are detected and grouped. Next, a calibration model is introduced to calibrate and combine the contours from the partial images. This process results in a point cloud consisting of every point from the outer and inner contours of the object, which can be used directly for automatic measurement. To verify the proposed method, the data were compared with those acquired from the ATOS measurement system, revealing a favourable correlation.
Using the regions of interest in the captured images and a unique calibration algorithm, the edges of the component are detected; subsequently, the bayonet size is calculated. Gadelmawla [10] presented a computer vision system for the measurement of pinion and spur gear parameters. Through image processing algorithms such as image binarization, edge detection, edge thinning, and labelling, the outer diameter, root diameter, and the number of teeth in the gears can be calculated. Moru and Borro [11] developed a measurement system that uses a telecentric lens to determine the parameters of spur gears with diameters under 70 mm. By assessing the global uncertainty involved in the measurement process, the system effectively reduced the measurement error of the gear diameter to ± 0.02 mm. In the study by Ye and Hsu [12], a machine vision device was introduced for the inspection of defects in the assembly of unidirectional bearings in automobiles. Both online and offline inspection methods were developed to count the number of needles in the bearings as well as to detect errors in part assembly. Hsu et al. [13] proposed a method for classifying a random mixture of metal objects, such as bolts, nuts, and washers, in a container. The application of machine vision under various lighting environments and a self-developed calibration method enables the system to automatically control the gripper to move parts to the desired positions.

In general, the accuracy of an inspection system is affected by the properties of the camera. For example, the higher the camera resolution, the greater the pixel density of the image and, therefore, the measurement accuracy. Peng et al. [14] constructed a computer vision algorithm for inspecting the diameter and thickness of metal O-rings, a common type of seal. Using a camera with a 4 μm pixel resolution and a given tolerance of ± 0.1 mm for the 3-mm-thick O-rings, the system yielded a measurement accuracy of approximately 100%. Rejc et al. [15] developed an automated visual inspection method to measure the dimensions of a mechanical assembly known as the protector. Using a camera with a 4 μm pixel resolution, the system can measure the edges of the protector with a range of error within ± 0.02 mm. Wang et al. [16] presented a 16-camera system for measuring the radius of three-dimensional (3D) multi-bend tubes. With a camera pixel resolution of approximately 4.65 μm, the system can achieve an accuracy within ± 0.5 mm. Gadelmawla [17] introduced a vision system for the automatic measurement of various types of screw threads with a camera and a backlighting table. The results show that by using a charge-coupled device (CCD) camera with a 5 μm pixel resolution, the difference between the standard and measured values was ± 5.4 μm, which provides good accuracy for the measurement. Ngo et al. [18] presented a method using a two-camera system to measure the diameters of holes drilled in various planes. Using a camera with a 0.19 mm pixel resolution, the system can obtain an accuracy of ± 0.5 mm. Tan et al. [19] constructed an online measurement model for determining shaft diameters on a lathe using a laser linear light source. The maximum average measurement error was 0.019 mm when the shaft diameter was measured at a rotational speed of 1250 rpm.

The above literature review demonstrates an extensive effort in the construction of non-contact measurement systems through machine vision and image processing. These studies focused mainly on determining dimensions (e.g. diameter, length, and thickness) or calculating other dimensions based on the determined parameters (e.g. using the pitch diameter and the number of teeth to calculate the remaining characteristics of a gear, or examining the features of a thread on the basis of crest and root information). A few methods for measuring large objects have been proposed; however, the error rates remain quite high. The main reason is that the whole picture of the object is captured in a single, low-resolution image. Measuring large objects, especially large ring-shaped objects, is still a challenge for low-cost systems.

In the present study, a profile reconstruction system for ring-shaped objects was developed and experimental measurement was conducted. Comparison with the measurement results from an ATOS scanner revealed that despite its low cost and simplicity, the system could accurately measure the complex profile of a 192-mm-diameter ring-shaped object with an error under 0.063 mm. The three main contributions of this study are as follows: First, the paper introduces a method for reconstructing the 2D profiles of large-size ring-shaped objects using a simple single-camera system. Second, by capturing the inspection object with multiple partial images, the proposed system enhances the accuracy of the measurement results by increasing the resolution of the image up to 3.6 times. Third, the paper proposes an efficient and accurate calibration model to combine contours from discrete images of the object into the same world coordinate system (WCS) with small errors.

2 The inspection object and system setup

2.1 The inspection object

The inspection object in this study was the outer ring of a unidirectional bearing. The minimum and maximum diameters of the bearing are 165 and 192 mm, respectively. The outer and inner profiles are broached into complex shapes consisting of 16 teeth differing in width and angular orientation (Fig. 1). The geometrical tolerances for the linear and angular dimensions between the small teeth are ± 0.25 mm and ± 0.5°, and the tolerance of the maximum diameter ranges from +0.4 to +0.8 mm.
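The tolerance check implied by these specifications can be sketched as follows. This is an illustrative Python fragment, not the authors' code; the helper name `within` and the example measured values are hypothetical.

```python
# Hypothetical sketch: checking a measured dimension of the bearing ring
# against the tolerances stated above (values from the drawing in Fig. 1;
# the measured inputs are made up for illustration).

def within(measured, nominal, lower, upper):
    """True if the deviation (measured - nominal) lies in [lower, upper]."""
    dev = measured - nominal
    return lower <= dev <= upper

# Linear dimensions between the small teeth: +/- 0.25 mm
print(within(10.93, 10.8, -0.25, 0.25))   # True
# Angular dimensions: +/- 0.5 degrees
print(within(55.1, 55.4, -0.5, 0.5))      # True
# Maximum diameter: +0.4 to +0.8 mm above the nominal 192 mm
print(within(192.55, 192.0, 0.4, 0.8))    # True
print(within(192.1, 192.0, 0.4, 0.8))     # False (below the +0.4 lower limit)
```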
Int J Adv Manuf Technol
Fig. 1 The shape of the product with basic dimensions (unit: mm)
2.2 System setup

Because of its complex profiles, dimensional measurement of the ring-shaped object with conventional equipment such as callipers or micrometres has some limitations, and the use of modern equipment such as coordinate measuring machines or profile projectors requires highly skilled operators. A more favourable option involves constructing a virtual model of the object and conducting the measurement process in software. Because the object has a uniform height, instead of building a complex 3D model, constructing a 2D profile of the ring-shaped object is an approach that is not only simpler but also reduces the inspection time.

To estimate the difficulty in measuring the ring-shaped object, an image acquisition method was examined. With the camera set perpendicular to the top surface of the object, images of the entire object were captured as shown in Fig. 2. It is clear that this layout has several shortcomings. First, the image resolution cannot ensure sufficient tolerance: for the given image size of 2560 × 1920 pixels, the pixel resolution for the 192-mm-diameter object would be 0.1 mm, meaning that a difference of 3 to 5 pixels during image processing will exceed the tolerance of the measurement (0.5 mm, see Fig. 1). Second, the appearance of the inner surface in the captured images causes an error in the reconstruction of the inner contours. If the system uses a front light with a dark background, high-intensity light sources should be employed to take advantage of the reflection of the metal surface for contour recognition. However, the reflection causes blur at the border of the surface and influences the overall accuracy (Fig. 2b). If a light background is used, the intensity of the front light can be adjusted to a moderate level to prevent blurring; the dark image of the object is then easy to distinguish from the light background (Fig. 2c). However, owing to the properties of entocentric lenses with regard to the angle of view, images of the inner profile B are hard to observe, while profile A (located in a different plane from the outer profile) becomes more visible. This feeds incorrect information to the profile recognition process and affects the measurement results. The same situation occurs when the system uses backlighting; in addition, in this case, the image of the object is completely black (only the outer profile and profile A are visible).

To overcome the above-mentioned problems, the system could employ high-resolution cameras (to obtain a finer pixel resolution) and telecentric lenses (to suppress the image distortion). However, these changes would increase the system cost and are only suitable for measuring objects that are not very large, because the limiting diameter of the telecentric lens cannot be smaller than the diameter of the inspection sample.

This paper presents a new method to reconstruct the 2D profiles of large-size ring-shaped objects. Fig. 3 demonstrates
Fig. 2 Capturing the object using a single-camera system. a Schematic of the system; captured image in the case of b dark background under high-intensity light and c light background under moderate-intensity light
the structure of the proposed system, which consists of an industrial camera, a backlighting source, and a rotary table. Instead of capturing the entire object in a single image (as in Fig. 2), the camera is positioned to one side to capture a small region of it (Fig. 3b). The object is set on the rotary table, which rotates 30° after each image is taken. This shooting method allows all of the ring-shaped contours to be captured while reducing the appearance of the inner surfaces in the images, thus preventing the inner-profile capture problem. The captured region of the new method is smaller than that in Fig. 2, meaning that the partial images have a higher resolution; hence, the measurement accuracy is enhanced. For accurate edge detection and to reduce the influence of blurring, a backlighting source is chosen. A uniform backlight light-emitting diode (LED) panel is placed below the camera's field of view, facilitating image segmentation and avoiding reflection from the top surface.

Fig. 3 Illustration of the proposed system: a the schematic diagram of the system, b the 3D model of the system

A calibration plate is used to transform the image coordinates into the world coordinates as well as to combine contours from separate images. This plate, made from an acrylic sheet with a thickness of 0.1 mm and a diameter of 250 mm, is fixed on top of the object during the measurement. The patterns on the plate, i.e. the calibration points and separation lines, were manufactured by JuanShan Inc. (Taipei, Taiwan) using an Orbotech LP7008 Photoplotter. Regarding the calibration features, the machine uses laser etching techniques to achieve a maximum accuracy of 0.1 μm (corresponding to a resolution of 25,400 dpi). Fig. 4a presents the top view of the calibration plate, which consists of 12 separation lines and 276 points distributed across three concentric circles. Fig. 4b shows the placement of the object on the calibration plate. With an image size of 2560 × 1920 pixels and a captured field of view of 72 × 54 mm, the resolution of the captured images is approximately 0.028 mm/pixel.

The programming language C# in Visual Studio was used for image processing based on the Emgu CV library. The camera was connected to the computer through the uEye Cockpit in the IDS Software Suite, and the program outputs were the point cloud data of the contours.

The initial settings for the measurement process were selected to optimize image quality. First, the focal length of the lens was adjusted to provide evenly sharp images. Second, the lens aperture and LED lighting were finely tuned to minimize blurring at the object edges. Finally, images were taken while ensuring that two separation lines constraining a region appeared in the field of view. Fig. 5 depicts the system setup and a sequence of 12 images corresponding to partial regions of the object.

2.3 Image processing procedure

The proposed image processing system involves a three-part procedure described in Fig. 6. First, image masking is performed to remove the regions adjacent to the separation lines. Second, calibration points in the masked image are detected to calculate the coefficients of the calibration model. Third, the inner and outer contours in the partial images are extracted and calibrated to the same 2D plane. The image processing procedure is described in detail in the following section.

In stage 1.1, the partial images are converted to binary images to reduce unwanted noise and improve the visibility of the desired information in the image. Two common methods, Otsu's and simple thresholding, are used for this stage. As shown in Fig. 7b, the separation lines become visible through simple thresholding. However, this method also makes the noise in the image observable, as well as the image of the inner surfaces. Otsu's thresholding, on the other hand, can completely remove the noise and the inner-surface images, but the separation lines are also obscured by this automatic method (Fig. 7c). Therefore, simple thresholding is selected for this stage to enhance the visibility of the separation lines, and Otsu's thresholding is applied after the separation lines are detected to reduce the effect of noise in the images.

In stage 1.2, the separation lines, which define the removable regions in the partial images, are identified (Fig. 8). First, the separation lines are detected using the Hough Line Transform function, through which the coordinates of the endpoints of these separation lines (P1, P2, P3, and P4 in Fig. 8a) are returned. Subsequently, the coordinates of C, i.e. the intersection point of these lines, are calculated, and the intersection points between these lines and the image borders (M, N, P, and Q) are determined (see Fig. 8b).

Because point C is on line P1P2, the slopes of the two lines CP1 and CP2 are equal and can be determined using the formula:

tan φ1 = (Cx − P1x)/(Cy − P1y) = (P2x − P1x)/(P2y − P1y)    (1)

Multiplying both sides of Eq. (1) by the product of (Cy − P1y) and (P2y − P1y):
Fig. 4 Dimensions and layout of calibration plate. a Calibration plate with detailed dimensions (unit: mm), b placement of the object on the calibration plate
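The arithmetic behind the resolution gain of partial-image capture can be checked directly from the numbers quoted in the text (2560 × 1920 sensor, 192-mm object, 72 × 54 mm partial field of view). The sketch below is illustrative only; the variable names are not from the paper's code.

```python
# Arithmetic behind the resolution gain of partial-image capture,
# using the values quoted in the text (sketch, not the authors' code).

image_px = (2560, 1920)          # sensor size in pixels
object_diameter_mm = 192.0       # ring-shaped object

# Full-object capture: the 192-mm object must fit within the 1920-px
# (shorter) image dimension, giving roughly 0.1 mm per pixel.
full_view_res = object_diameter_mm / min(image_px)
print(round(full_view_res, 3))   # 0.1

# Partial capture: a 72 x 54 mm field of view over the same sensor.
partial_view_res = 72.0 / max(image_px)
print(round(partial_view_res, 3))  # 0.028

# Resolution improvement, approximately the 3.6x stated in the paper.
print(round(full_view_res / partial_view_res, 1))  # 3.6
```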
Fig. 5 The system structure and the captured images. a System setup; b a sequence of 12 captured images, and c user interface (UI) of the system
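The stage 1.2 geometry, i.e. the slope relation of Eq. (1) locating the intersection point C of the two separation lines and the Eq. (13)/(14)-style expressions for where each line crosses a horizontal image border, can be sketched in a few lines. This is an illustrative reimplementation with assumed helper names, not the paper's C# code.

```python
# Sketch of the stage 1.2 geometry (assumed helper names): C is the
# intersection of the two separation lines returned by the Hough Line
# Transform, and border_x gives the x-coordinate where a line through C
# meets a horizontal image border (cf. Eqs. (13) and (14)).

def intersect(p1, p2, p3, p4):
    """Intersection C of line P1P2 and line P3P4 (assumed non-parallel)."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def border_x(c, p3, p4, border_y):
    """x-coordinate where the line through C with the slope of P3P4
    meets the horizontal border y = border_y."""
    cx, cy = c
    return cx - (p4[0] - p3[0]) / (p4[1] - p3[1]) * (cy - border_y)

# Two separation lines meeting at (100, 100):
C = intersect((0, 0), (200, 200), (200, 0), (0, 200))
print(C)                                      # (100.0, 100.0)
print(border_x(C, (200, 0), (0, 200), 0))     # 200.0 (crossing y = 0)
```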
Fig. 6 Flowchart of the image processing procedure (START → i = 1 → select image number i → image masking: stage 1.1 binarization, stage 1.2 line detection, stage 1.3 masking → calibration: stage 2.1 blob detection, stage 2.2 calibration calculating → contour extraction: stage 3.1 contour extraction, stage 3.2 contour calibration → if i = 12 then END, else i = i + 1)

Px = Cx − [(P4x − P3x)/(P4y − P3y)](Cy − Py)    (13)

Qx = Cx − [(P4x − P3x)/(P4y − P3y)](Cy − Qy)    (14)

Because the separation lines in partial images can be automatically detected using the Hough Line Transform function, a difference of a few degrees of over-rotation of the table does not negatively affect the results of this stage. Even if the table is manually rotated to an angle between 25° and 35°, as long as the initial setting is followed, the images of the separation lines remain detectable.

In stage 1.3, the side regions constrained by the two lines MN and PQ are masked. Because of the properties of the lens, the features near the image border tend to be radially distorted, and the unstable number of calibration points appearing in these regions also makes it difficult to proceed with the calibration stage. Thus, to enhance the calibration accuracy, these regions must be removed. The small areas near the separation line, namely the matching regions (Fig. 9a), are retained to allow for the inspection of overlapping during the combination of contours from adjacent images. The masked image obtained in this stage encompasses 23 calibration points and a part of the object, as shown in Fig. 9b.

In stage 2.1, to prepare for the calibration, the points in the masked image are detected and labelled. Since these points are oriented in concentric arcs with different angular distributions, assigning them their exact world coordinates must be done with a specific scheme. First, by using the FindContours function from Emgu CV, the image centres of the blobs (i.e. the 23 calibration points and a part of the object) are detected. Subsequently, these blobs are grouped based on the distance between their centres and point C. By arranging the centres in ascending order according to their x-coordinates, these
calibration points are numerically labelled (Fig. 10). The geometries of the blobs after the sorting scheme are listed in Table 1. Let Di be the distance between a blob's centre (denoted by xi and yi) and point C (whose coordinates were calculated in stage 1.2), as presented in the following equation:

Di = √((xi − xC)² + (yi − yC)²)    (15)

The column "range of D" in Table 1 is the range of the distance Di that contains points in the same circle. The data in Table 1 indicate that the stage was completed through this simple sorting scheme. Given the extensive gap between the area of the calibration points and that of the ring-shaped blob (2,700 vs. 440,000 pixels), differentiating them is easy. Furthermore, given that the difference between concentric arcs varies from 300 to 500 pixels and the distance between points of the same arc ranges between 100 and 200 pixels, mislabelling will not occur.

Stage 2.2 is an essential step in the contour reconstruction procedure with regard to obtaining the world coordinates from the image coordinates.

The literature on image calibration focuses on estimating the relationships between actual coordinates and image coordinates. For example, the direct linear transformation method presented by Abdel-Aziz and Karara [20] approximates unknown parameters, and the calibration model developed by Deng et al. [21] combines the differential evolution and particle swarm optimization algorithms in the calibration of camera parameters. Because an entocentric lens is used in the system, the captured images are imperfect and exhibit some aberrations. Numerous methods have been suggested to approximate these distortions and enhance the precision of the calibration process, for example, the calibration technique based on images of a calibration plate captured from various positions [22], or the self-calibration method using a single image combined with geometric constraints, including vanishing points and ellipses [23].

The present study employs the quadratic function proposed by Hsu et al. [13] with 12 coefficients, as follows:

X = a1x² + b1xy + c1y² + e1x + f1y + g1
Y = a2x² + b2xy + c2y² + e2x + f2y + g2    (16)

where (x, y) are the coordinates of the points in the image, (X, Y) are the corresponding coordinates in the WCS, and a1, b1, c1, e1, f1, g1, a2, b2, c2, e2, f2, and g2 are the transformation coefficients of the equations. To obtain these transformation coefficients for each partial image, a regression analysis involving the mean squared error over the coordinates of the 23 calibration points detected in stage 2.1 was performed. The results are presented as the following matrix equations:
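Fitting the 12 coefficients of Eq. (16) amounts to two independent linear least-squares problems over the basis [x², xy, y², x, y, 1]. The sketch below illustrates this with synthetic data; the function names and the test points are assumptions, not the paper's implementation.

```python
# Sketch of fitting the 12-coefficient quadratic calibration model of
# Eq. (16) by least squares. The data below are synthetic; in the paper
# the model is fitted to the 23 calibration points detected in stage 2.1.
import numpy as np

def fit_quadratic_map(img_pts, world_pts):
    """Return (coeffs_X, coeffs_Y), each = [a, b, c, e, f, g] of Eq. (16)."""
    x, y = img_pts[:, 0], img_pts[:, 1]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    cX, *_ = np.linalg.lstsq(A, world_pts[:, 0], rcond=None)
    cY, *_ = np.linalg.lstsq(A, world_pts[:, 1], rcond=None)
    return cX, cY

def apply_map(coeffs, pts):
    """Map image points to world coordinates using fitted coefficients."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    return A @ coeffs[0], A @ coeffs[1]

# Synthetic check: world coordinates generated by a mildly distorted map
# that lies exactly in the model space, so the fit recovers it.
rng = np.random.default_rng(0)
img = rng.uniform(0, 100, size=(23, 2))
X = 0.028 * img[:, 0] + 1e-4 * img[:, 0]**2 + 5.0
Y = 0.028 * img[:, 1] - 1e-4 * img[:, 0] * img[:, 1] + 3.0
cX, cY = fit_quadratic_map(img, np.column_stack([X, Y]))
Xp, Yp = apply_map((cX, cY), img)
print(np.allclose(Xp, X) and np.allclose(Yp, Y))  # True
```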
The calibration algorithm is also involved in connecting the contours from the partial images. When two adjacent point clouds are merged, their matching regions overlap. Fig. 14 illustrates the final point cloud of the ring-shaped object with the points from adjacent images differentiated by colour. The magnified image shows that despite the complexity of the object profile, the point clouds from the overlapping regions match each other well, and the deviation may not be observed with the naked eye. Fig. 14b presents the straightening of the points in four overlapping regions. The difference between adjacent
Fig. 13 Deviation distribution of calibration points, as calculated using the proposed method. a Position of calibration points with the corresponding deviation values; b frequency of calibration error
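The Fig. 13-style evaluation, i.e. computing the deviation L of each calibrated point from its nominal position and binning the deviations in 1-μm steps, can be sketched as follows. The point positions below are synthetic, only the procedure is illustrated.

```python
# Sketch (synthetic data) of the Fig. 13-style evaluation: the deviation
# of each calibrated point from its nominal position is computed and
# binned in 1-um steps, as in the frequency plot of Fig. 13b.
import math, random

random.seed(1)
# Hypothetical calibrated vs. nominal positions (mm) for 276 points.
nominal = [(math.cos(k) * 80.0, math.sin(k) * 80.0) for k in range(276)]
calibrated = [(x + random.uniform(-5e-3, 5e-3),
               y + random.uniform(-5e-3, 5e-3)) for x, y in nominal]

# Deviation of each point, converted from mm to micrometres.
dev_um = [math.hypot(cx - nx, cy - ny) * 1000.0
          for (cx, cy), (nx, ny) in zip(calibrated, nominal)]

print(max(dev_um) < 10.0)  # True: all deviations below 10 um in this sketch
hist = [sum(1 for d in dev_um if i <= d < i + 1) for i in range(10)]
print(sum(hist) == len(dev_um))  # True: every point falls into one bin
```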
Fig. 14 Distribution of the point cloud in overlapping regions. a Point cloud of the ring-shaped object and the magnified overlapping regions; b straightening the points from the four overlapped arcs
point clouds was smaller than 0.011 mm, and the thickness of the point cloud in the overlapping regions varied between 0.033 and 0.046 mm.

3.2 The repeatability test results

Because the calibration plate is manually fixed on top of the object, the positions of the plate and the ring-shaped object in different measurement procedures are not alike. Therefore, partial images of the same object from different measurements are not identical. Fig. 15 depicts a tooth captured from two different measurements. In the first case (Fig. 15a), the tooth is positioned at the centre of the image, with the edges oriented vertically. In the second case (Fig. 15b), the object is rotated to the left, and the edges of the tooth are tilted. Magnified images show that the edges from the first case (A and B) are smooth, whereas those from the second (C and D) are rough.

For a more detailed observation, the point clouds calibrated from the two cases were set side by side for comparison. Fig. 16a and 16b present the results, with the outer contours of the images extracted separately. The tooth profile in Fig. 16c was constructed by rotating the point cloud of case 2. Comparison of the two point clouds of the tooth revealed that the contours of the same object in different measurements were not completely identical. As shown in Fig. 16c, the rough edges from case 2 give a larger deviation, with the thickness of the profile up to 0.041 mm. The magnified regions indicate differences in the two cases, which can be explained by the camera distortion
and edge blurring problems. The subpixel effect may also affect the point cloud at the edges as well as the thickness of the calibrated contours.

To evaluate the precision of the proposed system, a repeatability test was conducted. The experiment was performed three times on the same object. The table was manually rotated to magnify the effects of the binarization algorithm, the light, and the lens distortion. The ring-shaped object and calibration plate were repositioned after each measurement to assess the effectiveness of the calibration algorithm. To measure the deviation between the extracted experimental point clouds, Geomagic Design X software was used. Fig. 17 shows 28 dimensions of the ring-shaped object, including the angles between the teeth (12 angular dimensions, A1 to A12), the widths of the small teeth (13 alignment dimensions, T1 to T13), and the radial dimensions (R1, R2, and R3). Table 2 presents the results of the measurement, including the point cloud dimensions, the measurement ranges Re, and the means M.

As shown in Table 2, the dimension ranges of the point cloud Re changed only slightly. The maximum range of the angular dimensions, 0.040°, did not depend on the magnitude of the angle being measured. For example, although angle A12 was three times larger than A1, they had the same range of 0.036°. Angles A3 and A6 had the same magnitude (≈13.8°), but their ranges were 0.008° (the smallest value) and 0.04° (the largest value), respectively. Regarding the linear dimensions, the greatest range was 0.047 mm (for the tooth width T5 = 10.8 mm) but was only 0.02 mm for the radius R1 (96.3 mm, which is 9 times larger). These deviations are partially attributable to the minor errors the calibration algorithm generated during the point cloud construction. Other explanations are the subpixel effect and the blurred edges, which caused instability in the measurement.

The running time for a single process was measured using an HP Z6 Workstation with an Intel Xeon Silver 4108 processor and 32 GB of RAM. The average processing time for each step is presented in Table 3.

To improve the quality of the system, using high-resolution cameras is recommended to create higher-quality images, thereby reducing the deviation of the point cloud. Increasing the number of captured images and decreasing the field of view can also be achieved by redesigning the calibration plate (with more partial regions) and using lenses with a smaller angle of view (to reduce the effect of distortion); however, these strategies also increase the system processing time.

3.3 Comparison with ATOS scan results

Because of the influence of the machining process, the dimensions of the product always have deviations. Therefore, it is necessary to use another measuring system to evaluate the performance of the proposed system.

In this study, the efficiency and accuracy of the proposed system in the measurement of the ring-shaped object were compared with those of an ATOS scanner for the following reasons: First, as a popular commercial vision system, ATOS scanners collect measurement data and reproduce 3D models of the object with high accuracy and reliability. Second, from the 3D point cloud obtained from an ATOS scanner, 2D point cloud profiles of the object can be extracted, facilitating comparison with those obtained from the proposed system.

An ATOS I scanner model was used as a reference for the evaluation of system accuracy. The scanner yielded a 3D point cloud of the object that contained 438,450 points. These data were imported into GOM Inspect software, and three sections parallel to the top surface of the object, with offset distances of 2, 7, and 12 mm, respectively, were created (Fig. 18b). The measurement process was the same as that used for the 2D points of the original system; specifically, each section was
measured three times, and the final dimension was the average of these values.

The point clouds from the two systems were merged with the standard profile, and six regions from the merged images were selected for an initial analysis of the similarities. These regions are magnified and displayed in Fig. 19. As shown in the magnified images of the selected regions (Fig. 19b), the point cloud produced by the proposed system was smaller than that constructed by the ATOS scanner. This is because the blurring in the object profiles makes them appear smaller in the captured images. The differences between these point clouds can be further examined by straightening the six magnified regions. With the distance between the points and the standard profile taken as y-coordinates, the differences between the point clouds and the standard profile are generated, as presented in Fig. 19c.

From Fig. 19c, parameters of the point cloud, including the measurement precision, the error range, and the density of the point cloud, were analysed. Compared with the standard profile, the contours from the ATOS scanner were smaller by about −0.10 to −0.05 mm, a smaller offset than the differences of the point cloud created by the proposed system (in the range between −0.15 and −0.10 mm). Furthermore, the ATOS point cloud was smoother, with the thickness of the contour ranging from 0.003 to 0.013 mm, smaller than that of the point cloud constructed by the proposed system, whose maximum thickness reached 0.047 mm. The point spacing in the section of the ATOS scanner varied between 0.008 and 0.5 mm,
while the point spacing in the model of the proposed system was constant and equal to the pixel size (0.028 mm).

To evaluate the system accuracy, dimensions from the scanned model were measured and compared with those from the proposed system. By averaging the measurement results from the three sections, the 28 dimensions (mentioned in Fig. 17) of the scanned model were calculated. These dimensions were used as references for comparison with the results of the three measurements from Table 2. The deviations of the measurement results obtained from the two models, calculated using the formula Δi = di − D, are presented in Table 4.

The results in Table 4 show that, compared with the point clouds of the ATOS scanner, the deviations of those generated from the proposed system range within ± 0.042° for the angular dimensions and from −0.081 to −0.015 mm for the tooth widths. Regarding the radius measurements, the deviations of R1 and R2 had a negative range (from −0.029 to −0.004 mm), whereas the deviation of R3 yielded a positive range (from +0.056 to +0.067 mm). These deviations can be explained as follows: First, because the image processing of the proposed system under thresholding binarization is influenced by the lighting, pixels in the blurred regions tend to be diminished, which results in shrinking of the outer contours (which contain the dimensions Ti, R1, and R2) and expansion of the inner contours (dimension R3) of the captured images (Fig. 20). The subpixel effect also impacts the point clouds' thickness in the three measurements (see 'Section 3.2') and increases the difference between measurements from the two systems' point clouds.

Another explanation for these deviations, as mentioned above, is that the dimensions obtained from the three sections of the ATOS scanned model were measured separately and then averaged, whereas the dimensions measured from the proposed system were obtained only from the top surface. If the side
Table 5 Dimensional comparisons between the point clouds from the two systems (columns: sample no.; dimension; nominal dimension D0; proposed system measurements #1 (d1), #2 (d2), #3 (d3); ATOS scanner DT; deviations Δi = di − DT for Δ1, Δ2, Δ3)
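The deviation computation reported in Tables 4 and 5 is a per-measurement subtraction, Δi = di − DT. The fragment below sketches it with made-up values; the function name and inputs are illustrative, not data from the paper.

```python
# Sketch of the Table 4/5 deviation computation: each of the three
# measurements d1..d3 from the proposed system is compared with the
# ATOS reference D_T via Delta_i = d_i - D_T. Values are illustrative.

def deviations(d, d_ref):
    """Return [d_i - d_ref] rounded to 3 decimal places (mm)."""
    return [round(di - d_ref, 3) for di in d]

# Hypothetical tooth-width measurements (mm) against an ATOS reference:
print(deviations([10.772, 10.781, 10.765], 10.820))
# [-0.048, -0.039, -0.055]
```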
dimension), the proposed system can be used in the industrial environment for checking dimensions after broaching. If the system employs some modifications, such as redesigning the calibration plate and using a telecentric lens and collimated light sources, the accuracy of the system can be improved and the measurement deviation reduced.

It can be seen that, despite having a higher measurement deviation, the proposed system is still a cost-effective and easy-to-operate measurement system. The installation of the whole system costs around US$2,000, while the price of an ATOS I scanner is US$100,000, not to mention that operating ATOS scanners requires skilled technicians. Furthermore, the warm-up time before each measurement