
1.0 Introduction
In the metal casting and manufacturing industry, there is a high probability that the final product contains flaws or defects, whether introduced during the manufacturing process or as in-service cracks. These flaws strongly affect the life cycle of metal plates or parts and the service they provide. It is therefore important to inspect metal parts and take corrective action to ensure a safe working environment.

2.0 Inspection type and application


The article (Koskinen, T., 2021) focuses on creating a Machine Learning (ML) ultrasonic inspection model that detects flaws with an accuracy close to that of a human inspector. The authors designed an algorithm that yields a better Probability of Detection (POD) for the model. To test the model, multiple training data sets were generated using the CIVA2019 simulation software.

2.1 Specimen for inspection


As the specimen, they used a Dissimilar Metal Weld (DMW) pipe mock-up provided by the Swedish Qualification Centre AB (SQC). The defects in the specimen were an EDM notch and implanted defects. In total, six different flaws were available on the specimen: two large solidification flaws of 17 mm and 26 mm, with the 17 mm flaw tilted towards the carbon steel side; two small solidification flaws of 2 mm and 3 mm; and two 6 mm flaws, an EDM notch and a solidification flaw. For flaw scanning, circumferential flaws were considered rather than axial flaws.

2.2 Inspection set up


They used a Dynaray Lite with two Imasonic 1.5 MHz 32-element phased-array probes and, for the TRL acquisition, a wedge with a 7° roof angle. Feed water was used for coupling. To minimize the amount of data to process, only one B-scan line with a 60° inclination was considered. The data was recorded at a 16-bit depth towards the inner surface of the pipe for the best possible quality.
As the inspection procedure, they used ultrasonic B-scans from an optimized version of Zetec Inc.'s procedure C3467 Zetec OmniscanPA 03 Rev A.

2.3 Data augmentation


For ultrasonic inspection, training data is scarce. Therefore, to generate more training data for developing an ML model, the raw scanned flaw data were rotated, reflected, cropped, and scaled. Augmenting the six available flaws in this way with Trueflaw's eFlaw software generated 10,000 flaw images by scaling down the original scanned data, which provided an ample data set for improving the POD of the model. The artificially generated data sets from the software were used to train the model along with the real raw data obtained from scanning the specimen.
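As a rough illustration of the kind of geometric augmentation described above (rotation, reflection, cropping and scaling of B-scan images), a minimal NumPy sketch is given below. This is not the Trueflaw eFlaw software, and the transformation parameters are assumptions chosen only for illustration.

```python
# Minimal sketch of geometric augmentation of a B-scan stored as a 2-D NumPy
# array. This is NOT the eFlaw software; it only illustrates rotation,
# reflection, cropping and amplitude scaling with made-up parameters.
import numpy as np

def augment_bscan(bscan, rng):
    """Return one randomly augmented copy of a B-scan (2-D array)."""
    out = bscan.copy()
    if rng.random() < 0.5:                      # horizontal reflection
        out = np.fliplr(out)
    if rng.random() < 0.5:                      # 180-degree rotation kept simple here
        out = np.rot90(out, k=2)
    # random crop to 90 % of the original size
    h, w = out.shape
    ch, cw = int(0.9 * h), int(0.9 * w)
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    out = out[y0:y0 + ch, x0:x0 + cw]
    # random amplitude scaling of the signal
    return out * rng.uniform(0.8, 1.2)

rng = np.random.default_rng(0)
raw_scan = rng.normal(size=(480, 2000))         # placeholder for a real scanned B-scan
augmented = [augment_bscan(raw_scan, rng) for _ in range(5)]
```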

3.0 Neural network


To familiarize the model with ultrasonic inspection and flaw detection, they developed an algorithm that acts as the neural network for the process. The difference between a human and a machine inspection lies in the theoretical reasoning behind the task. Unlike a human mind, the machine has no imagination beyond the data it has analyzed before. Therefore, many training data sets are required for the model to learn different flaw perspectives.

Figure 1: Neural network of the ML model (Koskinen, T., 2021)

3.1 Virtual flaw data generalization


Generalizing the training data by tuning the hyperparameters is a viable approach. Through generalization, the model can learn and remember the variation in probable flaw data. For example, batch normalization acts by removing the internal covariate shift from the activations within the network. According to Masters, D. (2018), the best results are achieved using a batch size of 2 to 32, which can be extended to 64. A larger batch size speeds up the training of the model while retaining accuracy. Another approach is introducing a dropout layer, which zeroes out the layer's output values at random during training and thereby affects how the model learns.
Garbin, C. (2020) showed that combining dropout and batch normalization layers can drastically decrease the accuracy of the machine inspection. On the other hand, Li, X. (2018) suggests that the dropout layer should be placed after all batch normalization layers for the best possible results.
Based on Augmented ultrasonic data for machine learning by Virkkunen (2021), the authors of the article (Koskinen, T., 2021) developed a refined deep neural model. For greater accuracy, the convolutional and dense layer dimensions were enlarged. Moreover, after each convolutional layer, a max-pooling and a batch normalization layer were used for better generalization and to reduce overfitting. The dimensions of each layer in this network structure were adjusted through trial and error.
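A minimal Keras sketch of a network in the spirit described above is shown below: each convolutional layer is followed by max pooling and batch normalization, with a dropout layer placed after the batch normalization layers and ahead of the dense classifier, per Li et al. The layer counts, filter sizes and input shape are illustrative assumptions, not the published architecture.

```python
# Illustrative CNN sketch: conv -> max pool -> batch norm repeated, dropout
# before the dense classifier. All dimensions are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_flaw_detector(input_shape=(240, 118, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.BatchNormalization(),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dropout(0.5),                    # dropout placed after the batch-norm layers
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # hit/miss flaw probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_flaw_detector()
model.summary()
```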

4.0 Methodology
To prove the ability of the ML model to detect real flaws, they compared the raw scanned data with the simulated flaw data and measured the efficiency of the model. The original B-scan dimension was 480x2000, where the sound path was narrowed down to represent the inner area of the mock-up. To minimize the calculation time, the data was further pre-processed by max-pooling with a factor of 1/4, which reduced the original data to 480x118. The image was then normalized by subtracting the mean value and dividing by the standard deviation. If an image was labelled as a flaw, the ML pipeline introduced one flaw at a random location along the weld. This created overfitting on the weld. Cropping the image into halves allowed the ML to introduce flaws in multiple locations around the weld, and from the 480 scan-line samples the ML learnt a better and properly fitted representation of the image.
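The pre-processing chain described above can be sketched in NumPy roughly as follows: max-pooling along the sound path, normalization to zero mean and unit variance, and cropping into halves. The array sizes and pooling window are illustrative assumptions rather than the exact values used in the article.

```python
# Minimal NumPy sketch of the pre-processing steps described above.
import numpy as np

def max_pool_1d(bscan, factor=4):
    """Max-pool a (scan_lines, samples) B-scan along the sample axis."""
    lines, samples = bscan.shape
    samples -= samples % factor                      # drop any remainder
    pooled = bscan[:, :samples].reshape(lines, samples // factor, factor)
    return pooled.max(axis=2)

def normalize(img):
    """Subtract the mean and divide by the standard deviation."""
    return (img - img.mean()) / img.std()

def split_halves(img):
    """Crop the scan into two halves along the scan-line axis."""
    half = img.shape[0] // 2
    return img[:half], img[half:]

raw = np.random.default_rng(1).normal(size=(480, 472))   # placeholder B-scan
pooled = max_pool_1d(raw, factor=4)                       # -> (480, 118)
left, right = split_halves(normalize(pooled))             # -> two (240, 118) images
```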
The flaw data sets for training included all flaws, small, medium, large, no-large and no-small flaws, with flaw sizes from 2 to 26 mm. For the simulated data, all-flaws and no-small-flaws sets ranging from 1 to 6 mm were considered. The performance of the ML was evaluated with a hit/miss POD curve according to MIL-HDBK-1823a (Annis, C., 2009).
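For context, a hedged sketch of a hit/miss POD analysis in the spirit of MIL-HDBK-1823a is given below: a logistic model of detection probability against log flaw size, from which a90 (the size detected with 90 % probability) is read off. The data are invented for illustration, and the confidence-bound step needed for a true a90/95 value is omitted.

```python
# Hit/miss POD sketch: logistic fit of detection outcome vs log flaw size.
import numpy as np
from sklearn.linear_model import LogisticRegression

sizes = np.array([2, 2, 3, 3, 6, 6, 8, 10, 12, 17, 20, 26], dtype=float)  # mm, made up
hits  = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1])                    # 1 = detected

X = np.log(sizes).reshape(-1, 1)
model = LogisticRegression(C=1e6).fit(X, hits)      # near-unregularized logistic fit

# invert the fitted model to find the flaw size with POD = 0.90
b0, b1 = model.intercept_[0], model.coef_[0, 0]
a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
print(f"a90 is roughly {a90:.1f} mm (confidence bound for a90/95 not computed)")
```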

5.0 Results
The results of the study highlight the testing of the model's generalization capability and a comparison to VRR data of human performance on similar ultrasonic data. As per Koskinen, the summary of their work focuses on the following points. When designing a training data set for an ML model, it is essential to remember:

• Detection accuracy largely depends on the smallest flaw size in the training data.
• Flaw types may generalize differently, e.g. solidification cracks generalize worse than EDM notches.
• Using very small flaws deteriorates the model performance.
When comparing the ML POD against the VRR data, the model demonstrated consistent performance on larger flaws when it was trained with flaws of 6 mm or larger. From the POD results obtained in the article, they concluded that the best a90/95 value occurs when no small flaws are included in the training data set.

6.0 Potential applications


From the study of the article, it is evident that the performance of the machine learning model in detecting flaws from an image could be improved by simulating the subject in more detail. Thus, if the ML model is pre-installed in metal casting industries, the number of defective products reaching service could be reduced dramatically. Moreover, for inspecting industrial products that have developed in-service cracks, this ML model can detect flaws that seem undetectable to the human eye.

7.0 References
Annis, C., 2009, MIL-HDBK-1823a, Nondestructive Evaluation System Reliability Assessment, Technical report, Department of Defense, viewed on 28 August 2021, <http://www.statisticalengineering.com/mh1823/MIL-HDBK-1823A(2009).pdf>

Garbin, C., Zhu, X., Marques, O., 2020, Dropout vs. batch normalization: an empirical study of their impact to deep learning, Multimedia Tools and Applications, viewed on 29 August 2021, <https://doi.org/10.1007/s11042-019-08453-9>

Koskinen, T., Virkkunen, I., 2021, The effect of different flaw data to machine learning powered ultrasonic inspection, Journal of Nondestructive Evaluation, <https://doi.org/10.1007/s10921-021-00757-x>

Li, X., Chen, S., Hu, X., Yang, J., 2018, Understanding the disharmony between dropout and batch normalization by variance shift, arXiv:1801.05134, viewed on 30 August 2021, <http://arxiv.org/abs/1801.05134>

Masters, D., Luschi, C., 2018, Revisiting small batch training for deep neural networks, arXiv:1804.07612, viewed on 30 August 2021, <http://arxiv.org/abs/1804.07612>

Virkkunen, I., Koskinen, T., Jessen-Juhler, O., Rinta-Aho, J., 2021, Augmented ultrasonic data for machine learning, Journal of Nondestructive Evaluation, <https://doi.org/10.1007/s10921-020-00739-5>
