
Journal of Manufacturing Processes 65 (2021) 373–381


An effective defect detection method based on improved Generative Adversarial Networks (iGAN) for machined surfaces

Qingxia Wang a, Ronghui Yang a, Chongjun Wu a,b,*, Yao Liu b

a College of Mechanical Engineering, Donghua University, 201620, Shanghai, China
b Shanxi Key Laboratory of Advanced Manufacturing Technology, North University of China, 030051, Taiyuan, Shanxi, China

ARTICLE INFO

Keywords:
Defect detection
Improved Generative Adversarial Networks (iGAN)
Image restoration

ABSTRACT

In actual industrial applications, the predictive performance of a deep learning model depends mainly on the size and quality of the training samples, while defective samples take a long time to collect or are even impossible to obtain. In this article, an effective method based on improved Generative Adversarial Networks (iGAN) is proposed to detect defects on machined surfaces on the basis of positive sample training and image restoration. The model detects surface defects through positive-sample learning alone, without training on defect samples or traditional manual labels. An improved reconstructed-image network model is built on Generative Adversarial Networks: a defective image is repaired, the threshold of the residual image is determined by the Otsu algorithm, and the difference between the input image and the repaired image is compared to obtain the defect area. Finally, a validation experiment is conducted on complex surface images of an engine cylinder head. The results show that the improved Generative Adversarial Network is effective in both the image restoration and the defect identification process.

1. Introduction

The cylinder head is an important component of automobile engines, and its machining inevitably produces defects [1–3] such as holes, bumps and scratches. Machined defects affect product service life and durability [4,5], and they generally come in different sizes and irregular shapes. Hole defects are mainly caused by shrinkage cavities and porosity forming on the metal surface, due to excessively high temperature during casting and pouring, turbulent flow of molten metal, insufficient runner area, etc., and they are exposed after rough machining [6,7]. Bumps and scratches are generally caused by human handling at the workstation [8,9]. These defects not only spoil the appearance of the product but may also reduce the performance and life of the car engine [10]. Because manual visual inspection suffers from strong subjectivity, high labor intensity, high cost and uncontrollable results, automatic detection technology for cylinder head surface defects is of great significance for improving product quality [11–13].

In recent years, machine vision has been widely used in industrial quality inspection of silicon wafers, steel plates, PCBs and other products [14–18]. Traditional machine vision methods generally rely on feature operators that describe defects through geometric shape and texture features; classifiers such as support vector machines and neural networks are then used to classify these features for surface defect detection and recognition [19–22]. However, such feature operators are only effective for specific defects, and changes in defect size and shape degrade their performance. In practice, the surface geometry of mechanical parts is complex, the target defect and the background area tend to have low contrast, and noise, subtle defects and defect morphology are highly similar, so traditional image processing techniques often suffer from low recognition accuracy. Deep learning uses multilayer neural networks to automatically learn high-level features from large amounts of training data, which offers a way to improve the accuracy and speed of defect recognition. Jing et al. [23] proposed a fabric defect detection scheme using deep convolutional neural networks that decomposes fabric images into local patches as input. Ding et al. [24] used the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids, and designed suitable anchors through k-means clustering so that the network adapts to the detection of tiny defects and improves the performance of PCB defect detection.

* Corresponding author at: College of Mechanical Engineering, Donghua University, 201620, Shanghai, China.
E-mail address: wcjunm@dhu.edu.cn (C. Wu).

https://doi.org/10.1016/j.jmapro.2021.03.053
Received 8 September 2020; Received in revised form 27 March 2021; Accepted 27 March 2021
Available online 7 April 2021
1526-6125/© 2021 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.

Ren et al. [25] replaced the convolutional layers of a convolutional network with depthwise separable convolutions and added a center loss to the loss function, which improved the recognition of different defect types on strip surfaces. He et al. [26] proposed a multi-level feature fusion network that combines multiple hierarchical features into one feature and improved the accuracy of steel plate defect detection and recognition. Chen et al. [27] used a deep convolutional neural network with feature maps from different layers to detect fastener defects on the catenary support device, improving the defect detection rate. Existing research shows that the performance of deep learning models depends mainly on the size and quality of the training samples. However, in actual industrial scenarios the conditions are complex, enough samples cannot be obtained, and the sample classes are difficult to balance. For the small-sample problem, recognition can be improved by generating new samples to expand the data set. Gang et al. [28] used convolutional neural networks to mine deep features of the data to identify targets, but CNNs have many parameters and require a large number of samples to prevent overfitting. Ding et al. [29] proposed three data augmentation methods to expand the data set based on the characteristics of composite images, which improved the effectiveness of CNNs in identifying targets. Sun et al. [30] used synthetic minority oversampling (SMOTE) to expand the data set and improve the recognition rate, but the SMOTE algorithm may blur the boundary between the majority and minority classes and synthesize new samples from noise, so the quality of the generated samples is poor and the data set expansion effect is not ideal.

Generative Adversarial Networks (GAN), as a generative model, can generate data that conforms to the distribution of real samples through continuous confrontation between a generator and a discriminator [31]. At present, GAN is widely used in image generation, completion, style conversion and other fields [32]. Qi et al. [33] proposed a loss-sensitive generative adversarial network that pairs real and generated samples to improve the optimization and sample efficiency of the model. Dou et al. [34] used three-dimensional convolutions in a super-resolution GAN structure; by combining a content-based loss with an adversarial loss, the high-frequency details of the reconstructed image were enriched and the spectral characteristics of hyperspectral remote sensing images were retained. Radford et al. [35] proposed the deep convolutional GAN, which removes the fully connected layers on top of the convolutional features and batch-normalizes the inputs; trained on a data set of bedroom images, it generates realistic renderings. Based on the generative adversarial network, Liu et al. [36] proposed a new simulation network with an encoder architecture that uses wavelet fusion to improve the characteristics of the generated images and produce a high-quality button image set. Cao et al. [37] proposed a 3D-assisted dual generative adversarial network in which a normalizer and an editor frontalize the image, and the frontalized image is rotated to the desired pose guided by a remote code; the two-part task is regulated by a conditional self-cycle loss and a perceptual loss to rotate a face image accurately to any specified angle. Huang et al. [38] used the GAN architecture to address small surface-defect data sets, and then trained convolutional neural networks combined with Chunk-Max Pooling; their experiments show that as the amount of GAN-generated sample data increases, the recognition accuracy improves significantly. Singh et al. [39] adopted a conditional GAN (CGAN) to expand the data set. The recognition rates of the above methods are improved to a certain extent compared with plain CNNs.

Based on the idea of generative adversarial networks, this paper proposes a defect detection method based on positive sample training. The training process realizes defect identification through residual image comparison using only enough positive sample images, without collecting defect samples or manual annotations. This paper introduces the network structure of GAN and the improved GAN (iGAN) for image restoration in detail, together with the training method for the collected images. Experimental results are given to illustrate the effectiveness of the iGAN in image restoration and defect identification.

2. Generative adversarial networks for image restoration

2.1. Generative Adversarial Networks

The basic network structure of GAN is shown in Fig. 1. In the training process, the generator G receives a random noise vector z and generates a new image G(z), which is compared with the input real image; both are fed to the discriminator D for discrimination, the loss value L_D is output, and the network parameters of the generator G are optimized according to L_D. Through the game between G and D, the authenticity of the image G(z) produced by the generator gradually increases until the discriminator D can no longer discriminate where the input data come from, that is, the probability assigned by the classifier that the input comes from the real sample versus the generated sample is approximately 0.5, reaching a state of Nash equilibrium.

The objective function of GAN can be expressed as:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_x(x)}[\log D(x)] + \mathbb{E}_{z \sim P_z(z)}[\log(1 - D(G(z)))]    (1)

where \mathbb{E} represents the expectation; P_x represents the distribution of training samples of normal product images; P_z is the random prior distribution used for the generator G input; and D(x) denotes the probability that x comes from the real data distribution P_x(x).

According to formula (1), the training goal of the GAN model is to maximize the output of the discriminator D and minimize the output of the generator G. The loss functions of the discriminator D and the generator G can then be given as:

L(D) = -\mathbb{E}_{x \sim P_x(x)}[\log D(x)] - \mathbb{E}_{z \sim P_z(z)}[\log(1 - D(G(z)))]    (2)

L(G) = -\mathbb{E}_{z \sim P_z(z)}[\log D(G(z))]    (3)

Finally, the image generated by the generator G reflects, in an unsupervised manner, the mapping between the real data samples and the noise vector z, and through this mapping more images that follow the distribution of normal products can be generated.

2.2. Improved Generative Adversarial Networks (iGAN) for image restoration

An encoder-decoder network is usually adopted in GAN. When this network is used for defect image restoration, the input defect image first undergoes a series of down-sampling steps until it reaches an intermediate hidden layer, and a new image is then generated through a series of up-sampling steps. The input image therefore passes through all layers, including the intermediate layer, and information cannot be transferred directly from the encoding layers to the decoding layers. Hence, key features such as the geometric information of the non-defect area of the input image may be compressed or even lost, which can significantly affect the authenticity of the generated image. To let these key features bypass the intermediate layer, the U-Net form is adopted in this paper: a skip connection is added between layer i and layer n-i, so that all channel information of layer i and layer n-i can be shared.

Under normal circumstances, the discriminator of an ordinary GAN model is a two-class discriminator: it examines the input data and judges whether it is real or generated. The outputs of PatchGAN [40] and traditional discriminators are different.


Fig. 1. GAN network structure.
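The adversarial losses of Eqs. (2) and (3) can be sketched in a few lines. The following is an illustrative NumPy version (not the authors' code); it takes the discriminator's probabilities on real and generated batches and evaluates the sample means in place of the expectations.

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-8):
    """Discriminator loss L(D) = -E[log D(x)] - E[log(1 - D(G(z)))], Eq. (2)."""
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

def g_loss(d_fake, eps=1e-8):
    """Generator loss L(G) = -E[log D(G(z))], Eq. (3)."""
    return -np.mean(np.log(d_fake + eps))
```

At the Nash equilibrium described above, where D outputs 0.5 everywhere, L(D) evaluates to 2·log 2 ≈ 1.386 and L(G) to log 2, and the generator loss decreases as D is increasingly fooled.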

PatchGAN outputs an N×N matrix and then takes the mean value of the matrix as the output for judging true or false, which is fundamentally different from the original two-class discriminator that outputs a single true-or-false value. Each entry of the output matrix corresponds to a receptive field in the original image, i.e., to a part of the original image. To improve the generator's ability to capture the details of the input image, the discriminator should be able to obtain high-frequency information, so the PatchGAN discriminator structure is used in this study to discriminate the image at the patch scale. This discriminator can make a true-or-false judgment on each N × N area in the image, and the feature information of image blocks judged to be true is retained, so that the authenticity of the detail areas of the input image can be distinguished. The new network is defined as an improved Generative Adversarial Network (iGAN). Fig. 2 shows the overall framework of the image restoration network model iGAN proposed in this paper.

Table 1
Network structure of the generator.

Network layer  Name                             Parameter                    Value
1              Full-connection                  Number of neurons            128 × 6
                                                Activation function          ReLU
2              Two-dimensional under-sampling   Under-sampling factor        2 × 2
3              Two-dimensional convolution      Size of convolution kernel   7 × 7
                                                Step size                    2
                                                Activation function          ReLU
4              Batch normalization              Dynamic mean momentum        0.8
5              Two-dimensional under-sampling   Under-sampling factor        2 × 2
6              Two-dimensional convolution      Size of convolution kernel   5 × 5
                                                Step size                    2
                                                Activation function          tanh

The specific network parameters of the generator G are shown in Table 1. The dimension of the random noise signal z is set to 256.

Fig. 2. Framework of image restoration network model iGAN.
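The patch-level judgment that distinguishes PatchGAN from a two-class discriminator can be illustrated with a toy sketch: each k × k receptive field gets its own real/fake score, and the mean of the resulting score matrix is the final output. The per-patch scoring function below is a hypothetical stand-in for the learned discriminator, not the trained network.

```python
import numpy as np

def toy_patch_score(patch):
    # Hypothetical stand-in for the learned per-patch discriminator:
    # a sigmoid of the patch's mean gray level.
    return 1.0 / (1.0 + np.exp(-(patch.mean() - 0.5)))

def patchgan_decision(img, k):
    """Score each non-overlapping k x k patch (one entry per receptive
    field), then average the N x N score matrix into one scalar."""
    h, w = img.shape
    scores = np.array([[toy_patch_score(img[i:i + k, j:j + k])
                        for j in range(0, w, k)]
                       for i in range(0, h, k)])
    return scores, float(scores.mean())
```

For a 4 × 4 input and k = 2 this yields a 2 × 2 score matrix, so a true-or-false judgment is made for each local area rather than once for the whole image, which is what lets the discriminator retain high-frequency detail information.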


The two-dimensional convolutional layers are used to reduce the data dimension. A batch normalization layer is added after each two-dimensional convolutional layer. At the same time, ReLU is adopted as the activation function of the network to improve the training speed.

The network parameters of the discriminator D are displayed in Table 2. The activation function of the two-dimensional convolutional layers is LeakyReLU, which can improve recognition performance. The output of the discriminator is the probability that the input image belongs to the real sample images.

Table 2
Network parameters of the discriminator.

Network layer  Name                          Parameter                    Value
1              Two-dimensional convolution   Size of convolution kernel   5 × 5
                                             Step size                    2
                                             Activation function          LeakyReLU
2              Batch normalization           Dynamic mean momentum        0.8
3              Two-dimensional convolution   Size of convolution kernel   5 × 5
                                             Step size                    2
                                             Activation function          LeakyReLU
4              Batch normalization           Dynamic mean momentum        0.8
5              Full-connection               Number of neurons            1
                                             Activation function          sigmoid

2.3. Model training setup

The image acquisition equipment used in this study is a Mercury MER-500-14GM/C area-array camera with a Kowa LM8JCM lens and a field of view of 50 mm × 50 mm. During the test, a total of 1300 images of the cylinder head surface of a certain model from an auto parts manufacturer in Shanghai were collected, of which 1000 are normal, non-defect images and 300 contain defects. Three typical surface defects are collected: holes, bumps and scratches. The image pixel size is 500 × 500. The computer hardware is an Intel(R) Core(TM) i7-7700 CPU @ 2.80 GHz × 8 and an NVIDIA GTX1060 GPU (6 GB); CUDA 9.0 and cuDNN 7.3 are used for acceleration.

In the study, the TensorFlow and Keras frameworks from Google's neural network algorithm library were used to construct the iGAN, and uniform random sampling was adopted to extract 80 % and 20 % of the established data set as the training set and validation set, respectively. During network training, the generator G and the discriminator D were trained alternately: real data samples were obtained from the normal product images, the noise vector z was input to the generator G to generate the restored images, and the results of the discriminator D were then calculated according to formula (3). The Adam optimizer was used to continuously update the network parameters. In the training process, the initial learning rate is set to 0.0002 and the batch size to 64.

3. Performance evaluation of the iGAN model

In order to verify the accuracy and reliability of the improved Generative Adversarial Network method based on positive samples and image restoration for the detection of machined surface defects, three typical defect sample data sets are established: holes, scratches and bumps. A detection test on the surface defects of the engine cylinder head was carried out. The advancement of the proposed method is verified by comparing the restoration effects of the two approaches, GAN and iGAN, on the input image, because the quality of image restoration plays a very important role in the accuracy of defect detection. At the same time, the method is tested on image samples taken under different brightness conditions to further examine its robustness.

3.1. Image restoration effects

The input images are shown in Fig. 3; there are hole defects in the rectangular boxes. The pixel size of the input images is 500 × 500. The GAN and iGAN methods were used to restore the images, and the restored images are shown in the second column of Fig. 3. In this paper, the image difference method was employed to analyze the restoration effect. The principle is to subtract the gray values at corresponding coordinates of the restored image and the input original image to obtain the residual image. The third column in the figure is the three-dimensional grayscale distribution map of the residual image, in which the red mark reflects the grayscale peak. Under ideal conditions, the gray value of non-defect areas in the residual image is close to zero. It can be seen from the figure that the interference of false defects in the residual map obtained by the improved generative adversarial network is greatly reduced and the restoration effect is better, which helps the accurate detection of defects.

In order to better illustrate the restoration effect, a 14 × 14 pixel area near the defect in the original image shown in Fig. 3 was extracted for local enlargement, as shown in Fig. 4, where the red box is the approximate area where the defect is located. The comparison between Fig. 4(b) and (c) and the original image (a) shows that the gray values of the non-defect area in the image obtained by the iGAN method are closer to the gray values at the corresponding locations in the original image. Therefore, the difference between the two can highlight the defect area, that is:

\Delta P(x, y) = |P_t(x, y) - P_G(x, y)|    (4)

where ΔP(x, y) is the residual image, and P_t(x, y) and P_G(x, y) are the original image and the restored image, respectively.

According to formula (4), the residual images ΔP_GAN(x, y) and ΔP_iGAN(x, y) were obtained, as shown in Fig. 4(b) and (c) respectively, and the non-defect area was set as the background area. Fig. 5 shows the three-dimensional gray distribution of the two residual images. The overall grayscale range of the residual image ΔP_iGAN(x, y) is [0, 100]: the gray value of its background area varies within [0, 15], the maximum gray value of the defect area is 100, and the minimum is 50. The overall grayscale range of the residual image ΔP_GAN(x, y) is [0, 150]: the gray value of its background area varies within [0, 100], the maximum gray value of the defect area is 150, and the minimum is 60.

To further explain the restoration effect of the iGAN method, the gray distributions of the residual images in Fig. 3 were examined. A discrete, pulse-shaped gray distribution appears in the non-defect area of the residual image ΔP_GAN(x, y), mainly concentrated on the edges of geometric features such as holes and grooves. This indicates that, in image generation, the GAN method has a weak ability to keep the information unchanged on the edges of high-frequency geometric structures with large curvature changes, while the iGAN method restores these areas better, which is also confirmed by the gray average and mean square error of the residual images in Fig. 6.

In order to quantitatively evaluate the restoration effect, this paper used the gray average and the mean square error to evaluate the residual images. Fig. 6 shows the statistics of the residual images shown in Figs. 3 and 4. In terms of the background area, the gray mean and mean square error of the residual image ΔP_iGAN(x, y) are 25.9 and 13.1 lower than those of ΔP_GAN(x, y), respectively.


Fig. 3. Comparison of the image restoration effects by two methods.
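The 80 %/20 % uniform random split described in Section 2.3 can be sketched as follows; the function name and seed are illustrative, not from the authors' code.

```python
import numpy as np

def split_dataset(n_samples, train_frac=0.8, seed=0):
    """Uniform random split of sample indices into training and
    validation sets (80 % / 20 % as in Section 2.3)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(train_frac * n_samples))
    return idx[:n_train], idx[n_train:]
```

For the 1000 normal images collected here, this gives 800 training and 200 validation indices with no overlap.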

Fig. 4. Comparison of gray values in local areas: a) Gray scale of the original image; b) Gray scale of the image restored by GAN; c) Gray scale of the image restored
by iGAN.

Fig. 5. Three-dimensional gray distribution in residual image.
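The image-difference step of Eq. (4) amounts to a pixel-wise absolute subtraction of gray levels. A minimal sketch, assuming 8-bit grayscale arrays:

```python
import numpy as np

def residual_image(p_t, p_g):
    """Eq. (4): dP(x, y) = |Pt(x, y) - PG(x, y)|, computed in a wider
    integer type so subtracting uint8 images cannot wrap around."""
    diff = np.abs(p_t.astype(np.int16) - p_g.astype(np.int16))
    return diff.astype(np.uint8)
```

In a non-defect region a good restoration gives values near zero, while defect pixels stand out as the grayscale peaks seen in Fig. 5.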

In addition, the mean and mean square error values of the local residual images are also reduced by 35.3 and 26.9, respectively. For the defect area, the gray mean and mean square error of the residual image ΔP_iGAN(x, y) are 9.1 and 2.9 lower than those of ΔP_GAN(x, y), respectively, and the mean and mean square error values of the local residual images are also reduced by 23.0 and 3.8, respectively. These all indicate that the generated image P_iGAN(x, y) is closer to the input image P_t(x, y).

The iGAN method was used to restore the collected defect images, and the gray average and mean square error of the background area (non-defect area) in the residual images were calculated. Fig. 7 shows the statistical results of 20 randomly selected residual images. The gray average and mean square error of the background area in the residual image ΔP_iGAN(x, y) vary within [4, 8] and [6, 10], respectively, while those of the background area in ΔP_GAN(x, y) vary within [24, 38] and [18, 24], again suggesting that the generated image P_iGAN(x, y) is closer to the input image P_t(x, y).

It can be concluded that: 1) compared with the GAN method, the iGAN method restores the input image better; 2) in terms of the defect area, the gray averages of the residual images ΔP_iGAN(x, y) and ΔP_GAN(x, y) are basically the same, but both are greater than the gray average of the input image P_t(x, y).


Fig. 6. Statistics of residual images: a) Grayscale mean and mean square error of the residual image; b) Grayscale mean and mean square error of the residual image (local).

Fig. 7. Gray statistics of residual images (background areas).
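The gray average and mean square error statistics plotted in Figs. 6 and 7 can be computed per region from a residual image and a boolean mask. Here "mean square error" is interpreted as the standard deviation of the gray values, which is an assumption about the paper's metric.

```python
import numpy as np

def region_gray_stats(residual, mask):
    """Gray average and standard deviation of the masked region
    (e.g. background vs. defect area of a residual image).
    Interpreting 'mean square error' as the standard deviation
    is an assumption, not confirmed by the paper."""
    vals = residual[mask].astype(np.float64)
    return float(vals.mean()), float(vals.std())
```

A perfectly restored background region would return (0.0, 0.0), matching the ideal case described in Section 3.1 where non-defect residual values are close to zero.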

The residual images obtained by this method can therefore be used to locate the defects in the image.

3.2. Defect identification effects

The Otsu algorithm was used to determine the threshold; the inter-class variance can be expressed as:

g = w_0 \, w_1 \, (u_0 - u_1)^2    (5)

where w_0 is the proportion of defect-area pixels in the total pixels of the residual image, w_1 is the ratio of background-area pixels to the total pixels of the residual image, u_0 is the average pixel value of all defect areas, and u_1 is the average pixel value of all background areas. When the variance g reaches its maximum, the corresponding threshold can be considered the optimal threshold for defect segmentation.

In the residual image, if ΔP(x, y) > th, the pixel (x, y) is a defect point and is marked as 1; otherwise, it is a non-defect point and is marked as 0. The binarized residual image can therefore be expressed as:

B(x, y) = \begin{cases} 1, & \Delta P(x, y) > th \\ 0, & \Delta P(x, y) \le th \end{cases}    (6)

Fig. 8 shows the recognition results of three kinds of defects: holes, scratches and bumps. It can be seen from the binarized images that the proposed method can locate defects, with the best performance in separating hole defects, followed by bump defects, but with a relatively weak ability to recognize scratch defects. This is mainly due to the depth of the scratch defects and the brightness of the illumination system.

In order to further test the robustness of the proposed iGAN algorithm, images of the 300 defective products used in the previous test were obtained by sequentially changing the light source brightness. In this way, 1500 defective samples were collected: 1250 hole, 100 bump and 150 scratch samples. Fig. 9 describes the recognition of scratch defects at different brightness levels. When the relative brightness is 20, the test scene is dark and the machined texture on the product surface is prominent, so light scratches mix with it. When the relative brightness is 40, the test scene is bright, and scratches and surface texture are washed out and cannot be recognized. Fig. 10 shows the recognition accuracy of the three defect types under different brightness conditions. When the relative brightness is between 25 and 35, the recognition accuracy of holes and bumps reaches more than 95 %, while that of scratches is about 80 %. The results show that the proposed iGAN method is better suited to hole and bump detection than to scratch detection.

4. Conclusion

In order to solve the problems of deep learning-based defect detection methods, namely the large sample size required and the long period needed to obtain many defect samples in practice, an improved Generative Adversarial Network (iGAN) was proposed, which can detect surface defects through positive sample learning without training on defect samples or traditional manual labels. This article adopted the U-Net network form.


Fig. 8. Recognition effects of different types of defects.
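The segmentation described in Section 3.2, an Otsu-style search for the threshold maximizing the between-class variance of Eq. (5) followed by the binarization of Eq. (6), can be sketched as below. This is an illustrative brute-force version over the 8-bit gray range, not the authors' implementation.

```python
import numpy as np

def otsu_threshold(residual):
    """Search the threshold maximizing the between-class variance
    g = w0 * w1 * (u0 - u1)^2 of Eq. (5)."""
    vals = residual.ravel().astype(np.float64)
    best_t, best_g = 0, -1.0
    for t in range(1, 255):
        fg = vals[vals > t]   # candidate defect pixels
        bg = vals[vals <= t]  # background pixels
        if fg.size == 0 or bg.size == 0:
            continue
        w0, w1 = fg.size / vals.size, bg.size / vals.size
        g = w0 * w1 * (fg.mean() - bg.mean()) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

def binarize(residual, th):
    """Eq. (6): mark 1 where dP(x, y) > th (defect), else 0."""
    return (residual > th).astype(np.uint8)
```

On a residual image with a near-zero background and a few bright defect pixels, the threshold falls between the two gray populations and the binarized map isolates the defect points.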

Skip connections were added to this U-Net form so that key features such as geometric information can bypass the middle layers and share channel information. In addition, to improve the generator's ability to capture the details of the input image, this paper proposed a discriminator structure that discriminates the image at the patch scale, so that it can distinguish the authenticity of the detail areas of the input image. From the performance evaluation of the iGAN model, the following can be concluded:

(1) In image restoration, both the gray mean and the mean square error of the residual image ΔP_iGAN(x, y) are much lower than those of ΔP_GAN(x, y), which indicates that the generated image P_iGAN(x, y) is closer to the input image P_t(x, y). Moreover, the iGAN method restores the edges of geometric features such as holes and grooves better.

(2) In terms of the defect area, the gray average of the residual image ΔP_iGAN(x, y) is basically the same as that of ΔP_GAN(x, y), but it is greater than the gray average of the input image P_t(x, y).


Fig. 9. Images of scratch defects under different brightness.

This means that the residual images obtained by this method can be used to locate the image defects.

(3) In defect identification, experimental results show that the iGAN machined-surface defect detection method has good recognition accuracy in the detection of engine cylinder head defects, with recognition rates above 95 % for holes and bumps and above 80 % for scratches.

Fig. 10. Comparison of defect identification effects under different brightness.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (52005098), Shanghai Sailing Program (19YF1401400) and the Opening Foundation of Shanxi Key Laboratory of Advanced Manufacturing Technology (No. XJZZ202003). The authors wish to record their gratitude for their generous support.

References

[1] Li C, Li X, Wu Y, et al. Deformation mechanism and force modelling of the grinding of YAG single crystals. Int J Mach Tools Manuf 2019;143:23–37.
[2] Zhu D, Feng X, Xu X, et al. Robotic grinding of complex components: A step towards efficient and intelligent machining – challenges, solutions, and applications. Robot Comput Integr Manuf 2020;65:101908.
[3] Ding Z, Sun G, Guo M, et al. Effect of phase transition on micro-grinding-induced residual stress. J Mater Process Technol 2020;281:116647.
[4] Jiang X, Kong X, Zhang Z, et al. Modeling the effects of Undeformed Chip Volume (UCV) on residual stresses during the milling of curved thin-walled parts. Int J Mech Sci 2019;167:105162.
[5] Wu C, Pang J, Li B, et al. High speed grinding of HIP-SiC ceramics on transformation of microscopic features. Int J Adv Manuf Technol 2019;102:1913–21.
[6] Wu C, Li B, Liu Y, et al. Surface roughness modeling for grinding of silicon carbide ceramics considering co-existence of brittleness and ductility. Int J Mech Sci 2017;133:167–77.
[7] Chen N, Li H, Wu J, et al. Advances in micro milling: From tool fabrication to process outcomes. Int J Mach Tools Manuf 2020;160:103670.
[8] Li C, Li X, Huang S, et al. Ultra-precision grinding of Gd3Ga5O12 crystals with graphene oxide coolant: Material deformation mechanism and performance evaluation. J Manuf Process 2020;61:417–27.
[9] Ma T, Dang J, An Q. Burr formation simulation of end face grinding core of servo valve based on DEFORM3D. Diamond Abras Eng 2019;39(05):50–6.
[10] Wu C, Guo W, Li R, et al. Thermal effect on oxidation layer evolution and phase transformation in grinding of Fe-Ni super alloy. Mater Lett 2020;275:128072.
[11] Zhu H, Li H, Zhang L, et al. Study on simulation prediction and control method of burns in plunge grinding. Diamond Abras Eng 2019;39(05):44–9.
[12] Wu C, Li B, Yang J, et al. Experimental investigations of machining characteristics of SiC in high speed plunge grinding. J Ceram Process Res 2016;17(3):223–31.
[13] Guo W, Wu C, Ding Z, et al. Prediction of surface roughness based on a hybrid feature selection method and long short-term memory network in grinding. Int J Adv Manuf Technol 2021;112:2853–71.
[14] Wang Z, Zhu D. An accurate detection method for surface defects of complex components based on support vector machine and spreading algorithm. Measurement 2019;147:10686.
[15] Wang G, Li W, Jiang C, et al. Simultaneous calibration of multicoordinates for a dual-robot system by solving the AXB = YCZ problem. IEEE Trans Robot 2021. https://doi.org/10.1109/TRO.2020.3043688.
[16] Xie H, Li W, Zhu D, et al. A systematic model of machining error reduction in robotic grinding. IEEE/ASME Trans Mechatron 2020;25(6):2961–72.
[17] Yi L, Li G, Jiang M. An end-to-end steel strip surface defects recognition system based on convolutional neural networks. Steel Res Int 2017;88:1600068.
[18] Wei P, Liu C, Liu M, et al. CNN-based reference comparison method for classifying bare PCB defects. J Eng 2018;16:1528–33.
[19] Ghosh A, Guha T, Bhar R, et al. Pattern classification of fabric defects using support vector machines. Int J Cloth Sci Technol 2011;23(2-3):142–51.
[20] Liu H, Song W, Zhang Y, et al. Generalized Cauchy degradation model with long-range dependence and maximum Lyapunov exponent for remaining useful life. IEEE Trans Instrum Meas 2021;70:1–12.
[21] Liu H, Song W, Niu Y, et al. A generalized Cauchy method for remaining useful life prediction of wind turbine gearboxes. Mech Syst Signal Process 2021;153(6):107471.
[22] Zhou Q, Chen R, Huang B, et al. An automatic surface defect detection system for automobiles using machine vision methods. Sensors 2019;19(3):644.
[23] Jing J, Ma H, Zhang H. Automatic fabric defect detection using a deep convolutional neural network. Color Technol 2019;135(3):213–23.
[24] Ding R, Dai L, Li G, et al. TDD-Net: a tiny defect detection network for printed circuit boards. CAAI Trans Intell Technol 2019;4(2):110–6.
[25] Ren Q, Geng J, Li J. Slighter Faster R-CNN for real-time detection of steel strip surface defects. 2018 Chinese Automation Congress (CAC); 2018.
[26] He Y, Song K, Meng Q, et al. An end-to-end steel surface defect detection approach via fusing multiple hierarchical features. IEEE Trans Instrum Meas 2020;69(4):1493–504.
[27] Chen J, Liu Z, Wang H, et al. Automatic defect detection of fasteners on the catenary support device using deep convolutional neural network. IEEE Trans Instrum Meas 2017;67(2):257–69.
[28] Gang H, Wang K, Peng Y, et al. Deep learning methods for underwater target feature extraction and recognition. Comput Intell Neurosci 2018:1–10.
[29] Ding J, Chen B, Liu H, et al. Convolutional neural network with data augmentation for SAR target recognition. IEEE Geosci Remote Sens Lett 2016;13(3):364–8.
[30] Sun X, Li J, Gu L, et al. Identifying the characteristics of the hypusination sites using SMOTE and SVM algorithm with feature selection. Curr Proteomics 2018;15(2):111–8.
[31] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Proceedings of the 27th International Conference on Neural Information Processing Systems 2014:2672–80.
[32] Wang K, Zuo W, Tan Y, et al. Generative adversarial networks: from generating data to creating intelligence. Acta Autom Sin 2018;44(5):769–74.
[33] Qi G. Loss-sensitive generative adversarial networks on Lipschitz densities. Int J Comput Vis 2020;128(5):1118–40.
[34] Dou X, Li C, Shi Q, et al. Super resolution for hyperspectral remote sensing images based on the 3D Attention-SRGAN network. Remote Sens (Basel) 2020;12(7):1204.
[35] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv e-prints 2018;1511:06434.
[36] Liu L, Cao D, Wu Y, et al. Defective samples simulation through adversarial training for automatic surface inspection. Neurocomputing 2019;360:230–45.
[37] Cao J, Hu Y, Yu B, et al. 3D aided duet GANs for multi-view face image synthesis. IEEE Trans Inf Forensics Secur 2019;14(8):2028–42.
[38] Huang C, Lin X. Study on machine learning based intelligent defect detection system. MATEC Web of Conferences, vol. 201; 2018. p. 01010.
[39] Singh V, Romani S, Rashwan H, et al. Conditional generative adversarial and convolutional networks for x-ray breast mass segmentation and shape classification. arXiv e-prints 2018;1805:10207.
[40] Isola P, Zhu J, Zhou T, et al. Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017:1125–34.
