Abstract— Pig face recognition has a wide range of applications in breeding farms, including precision feeding and disease surveillance. This article proposes a method to guarantee its performance in complex environments, such as with dirty faces and in unconstrained outdoor conditions. First, inspired by the shape of the pig face, a trapezoid normalized pixel difference (T-NPD) feature is designed to achieve more accurate detection in unconstrained outdoor conditions. Subsequently, a trimmed mean attention mechanism (TMAM) uses a trimmed mean-based squeeze method to assign more precise weights to feature channels, and is then fused into a 50-layer ResNet (ResNet50) backbone network to classify detected pig face images with high accuracy. In addition, the TMAM can be applied in numerous common networks due to its universality. Finally, comprehensive experiments conducted on the publicly available JD pig face dataset indicate that the proposed method has superior performance compared with other methods, with an overall accuracy of 95.06%.

Index Terms— Attention mechanism, pig face detection, pig face recognition, trapezoid normalized pixel difference (T-NPD) feature, trimmed mean.

I. INTRODUCTION

it can enable precision feeding by identifying the health status of individual pigs [3]. Hence, pig face recognition has great significance and application value.

Several methods for individual pig identification and traceability have been proposed. One of the earliest is manual color marking, where each pig is painted with a different color, which is compared with a sample library to obtain a pig's identity number [4]. This is physically harmful to the pig and time-consuming for farmers. Radio frequency identification (RFID) [5] is achieved through an ear tag, but it has only achieved 88.6% accuracy, even at close range, and is costly, as each pig requires a tag [6].

Deep learning methods based on convolutional neural networks (CNNs) have enjoyed great success in the domain of image classification [7], [8], [9], and some related methods have been used for pig face recognition. For example, a shallow CNN consisting of six convolutional layers with alternating dropout and max-pooling layers was designed for pig face recognition [10]. However, this study required the pig face image to be painted to achieve manual features, which is time-consuming in practice. A Haar cascade classifier was
Authorized licensed use limited to: Tamkang Univ.. Downloaded on June 22,2023 at 07:17:38 UTC from IEEE Xplore. Restrictions apply.
3500713 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 72, 2023
model (GPN) was combined with an attention mechanism to perform cattle face re-identification [18]. Nevertheless, these attention mechanisms are currently mainly applied to the recognition of human or cattle faces, and their application in pig face recognition needs further exploration.

Face recognition of other animals has attracted attention. Inception-V3 was adopted to extract cattle face features from a rear-view video dataset, and these features were used to train a long short-term memory model to identify each individual cattle [19]. Other cattle face recognition methods, including PnasNet-5 [20], VGG-16 [21], and CattleFaceNet [22], have been well studied. Sheep face recognition has also been studied. Faster R-CNN was applied to detect sheep faces, and a ResNet50V2 model with the ArcFace loss function was used to classify them [23]. SheepFaceNet [24] and YOLOv4 [25] were proposed to realize sheep face recognition, and researchers have also worked on dog [26], [27] and giant panda [28], [29] face recognition. Due to differences in facial features, it has been difficult to apply the above methods directly to pig face recognition.

The aforementioned studies indicate that it is necessary to present a highly accurate recognition method for unconstrained outdoor scenes, including variable environments and dirty pig faces. We propose a pig face recognition method to address these problems. With a wide forehead and narrow mouth area, the pig face is trapezoidal. Hence, we extract features based on trapezoidal pixel regions rather than pixel points. We develop a trimmed mean attention mechanism (TMAM) to assign weights to different channels, which helps the backbone network achieve more accurate pig face classification. In addition, it should be noted that this article mainly considers the common breeding scenario of only one pig in an enclosure in the pig industry.

The main contributions of this article can be summarized as follows.

1) A pig face recognition method based on trapezoid normalized pixel difference (T-NPD) features and TMAM is proposed to realize highly accurate pig face detection and classification, which is practical when complex environments such as dirty faces and unconstrained outdoor conditions are considered. In particular, it is able to achieve pig face detection and recognition from different angles.

2) A new image feature, T-NPD, is developed by analyzing the biological characteristics of the pig face, with its wide forehead and narrow mouth. It is more suitable for complex backgrounds because extracting features from trapezoidal pixel regions avoids the effects of dirty faces and facial variations. Experimental results show that the proposed pig face detector using T-NPD achieves excellent performance in terms of accuracy, precision, recall, and F1.

3) An attention mechanism, TMAM, is proposed, which uses a trimmed mean squeeze operation instead of global average pooling (GAP) to eliminate the effect of edge values on channel descriptors, thus assigning more accurate weights to feature channels. Experimental results show that the 50-layer ResNet (ResNet50) with TMAM (TMAM_ResNet50) can realize high-precision classification of pig faces, improving the overall accuracy by 2.13% compared to ResNet50. The proposed attention mechanism has excellent portability and can help enhance the performance of common networks such as ResNet18, ResNet34, ResNet50, ResNet101 [30], and ResNeXt50 [31].

The remainder of this article is organized as follows. Section II reviews related work on CNNs and attention mechanisms. Section III presents the proposed pig face recognition method. Section IV presents experimental results and discussion. Conclusions are drawn in Section V.

II. RELATED WORK

Many deep learning-based technologies have been proposed for pig face recognition and other image classification tasks. We briefly review these in the two main categories of CNNs and the attention mechanism.

A. CNNs

Because a CNN can automatically learn features from the input image without manual operators, it has tremendous potential in image classification. The association of CNN performance and network depth has been researched through the design of VGG [32]. GoogLeNet was developed to boost the multiscale feature extraction ability of CNNs through parallel convolution modes [33]. DenseNet [34] connects each layer to every other layer in a feedforward manner. DLA [35] gives the network higher accuracy with fewer parameters by combining layers in a tree structure. The study of ResNet [30] showed that the problem of gradient vanishing can be mitigated through a skip structure, and its bottleneck architecture improves computational efficiency. Consequently, networks such as Res2Net [36], ResNeXt [31], and ResNeSt [37] have been proposed. Although many CNNs have been proposed to improve performance from different perspectives, ResNet50 has obvious advantages. It has been shown to require fewer training parameters than CNNs of the same depth, and is less costly to train than networks with similar residual structures [38]. More importantly, ResNet50 has shown high stability in image recognition tasks [9]. Hence, we use ResNet50 as the backbone network in this article.

B. Attention Mechanism

The attention mechanism is inspired by human perception, by which a person concentrates on important features and ignores others. The attention mechanism was first used in natural language processing, and has shown great potential when fused with CNNs for feature extraction and image classification [39], [40], [41].

To this end, the design of attention mechanisms has received extensive attention. For instance, GENet [42] introduced a spatial attention mechanism to focus on task-related regions, which is suitable for object detection. SENet [16] used a channel attention mechanism to perform GAP to squeeze feature channels, whose weights were calculated with three
XU et al.: PIG FACE RECOGNITION BASED ON T-NPD FEATURE AND TMAM 3500713
TABLE I
PARAMETERS AND VALUES USED IN THE TMAM_RESNET50
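The squeeze step at the heart of TMAM replaces GAP with a trimmed mean over each channel's spatial values, so that extreme edge values contribute less to the channel descriptor. The following NumPy sketch illustrates the idea only; the trim fraction and the module's placement inside ResNet50 are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def trimmed_mean_squeeze(fmap, trim=0.1):
    """Channel squeeze for a (C, H, W) feature map.

    Instead of global average pooling, the lowest and highest `trim`
    fraction of each channel's spatial values are discarded before
    averaging, so extreme (edge) values contribute less to the channel
    descriptor. The trim fraction 0.1 is an assumed placeholder.
    """
    c, h, w = fmap.shape
    flat = np.sort(fmap.reshape(c, h * w), axis=1)
    k = int(h * w * trim)          # values dropped at each end
    kept = flat[:, k:h * w - k]    # central order statistics per channel
    return kept.mean(axis=1)       # shape (C,), one descriptor per channel
```

In an SE-style block, this descriptor would then pass through two small fully connected layers and a sigmoid to produce per-channel weights that rescale the feature map.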
Fig. 8. Pig faces detected by T-NPD detector in unconstrained outdoor conditions. Green boxes represent detected pig faces.
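For context, the NPD feature underlying both detectors is the bounded intensity difference f(x, y) = (x − y)/(x + y), defined as 0 when x = y = 0 [46]; T-NPD applies the same ratio to the mean intensities of trapezoidal pixel regions rather than single pixels. A minimal sketch of this idea follows; the region shapes and sampling layout are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def npd(x, y):
    """Normalized pixel difference f(x, y) = (x - y) / (x + y) [46].

    Bounded in [-1, 1] and invariant to intensity scaling; defined
    as 0 when both intensities are 0.
    """
    x, y = float(x), float(y)
    return 0.0 if x + y == 0 else (x - y) / (x + y)

def t_npd(img, region_a, region_b):
    """T-NPD-style value: the NPD of the mean intensities of two pixel
    regions instead of two single pixels.

    Regions are (rows, cols) index arrays; the trapezoidal layout used
    in the paper is not reproduced here, so these regions are purely
    illustrative.
    """
    return npd(img[region_a].mean(), img[region_b].mean())
```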
from NPD detector. Consequently, it can be confirmed that the proposed T-NPD detector is superior to the NPD detector, and has excellent practicability in pig face detection. Fig. 7 shows a visual comparison between T-NPD and NPD. It is obvious that T-NPD correctly obtained the location of a pig face that was incorrectly detected by NPD. Fig. 8 displays more
Fig. 10. Class activation maps with and without the proposed attention mechanism for pig face images. (Top) Original pig face images. (Middle) Class
activation maps drawn by ResNet50. (Bottom) Class activation maps drawn by TMAM_ResNet50.
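Class activation maps of the kind shown in Fig. 10 are commonly produced with the CAM recipe: the final convolutional feature maps are combined using the classifier weights of the target class and normalized for display. The paper does not state which variant it used, so the NumPy sketch below assumes a GAP-plus-linear classification head.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """CAM heat map from final conv features (C, H, W) and the linear
    classifier weights (n_classes, C) of one class.

    The class's weight vector combines the C feature maps into one
    (H, W) map, which is shifted and scaled to [0, 1] for display.
    """
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```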
Fig. 11. Confusion matrices obtained by incorporating different attention mechanisms into ResNet50 for the pig face classification task. (a) ResNet50. (b) ResNet50+SE. (c) ResNet50+CBAM. (d) ResNet50+FCA. (e) ResNet50+TMAM.
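The diagonal/off-diagonal reading of Fig. 11 corresponds to a row-normalized confusion matrix, which can be computed from true and predicted identity labels as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Row-normalized confusion matrix.

    Entry (i, j) is the fraction of class-i samples predicted as
    class j: the diagonal holds per-class recall and off-diagonal
    entries are error rates. Assumes every class appears in y_true.
    """
    m = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m / m.sum(axis=1, keepdims=True)

def overall_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted identity is correct."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
```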
and width of the output feature map. Furthermore, it is worth noting that the training parameters of these models in this part remained consistent with those in Section IV-A.

As can be observed from Table V, although the Params and FLOPs of TMAM are comparable to those of the other attention mechanisms, its mean AUC and overall accuracy are better. This demonstrates that the original model equipped with TMAM has a better ability to achieve pig face classification. Moreover, it should be noted that the experimental results in Table V were obtained by training the model several times and averaging the results. Fig. 11 shows the confusion matrices of the different methods, where the diagonal and off-diagonal values indicate the proportions of correct and incorrect predictions, respectively. It is clear that the diagonal response of TMAM outperforms the other attention mechanisms, which means that it can improve classification performance. Fig. 12 presents the overall accuracy curves of pig face classification for the various attention modules over the training epochs, which intuitively suggests that TMAM can be better equipped with the backbone network than the other attention mechanisms. Based on the above results, it is concluded that TMAM improves the performance of the backbone network more than other attention mechanisms with the same Params and FLOPs.

TABLE V
ACCURACIES AND PARAMETERS FROM DIFFERENT ARCHITECTURES AND ATTENTION MECHANISMS

TABLE VI
CLASSIFICATION RESULTS OF DIFFERENT ARCHITECTURES

Fig. 12. Overall accuracy curves on the validation set for different attention mechanisms.

Fig. 13. Overall accuracy curves on the validation set for different classification methods.

D. Evaluation of Proposed TMAM_ResNet50

In the last experiment, to validate the proposed pig face classification method, TMAM_ResNet50 was compared with six state-of-the-art classification algorithms: ResNet50 [30], MobileNetV2 [53], MobileNetV3 [54], RegNet [55], ShuffleNetV2 [56], and EfficientNetB0 [57]. The network training parameters and performance metrics remained consistent with those in Sections IV-A and IV-C, respectively. The classification results obtained by the different algorithms are shown in Table VI, from which it can be seen that the overall accuracy, mean recall, mean precision, and mean F1 of TMAM_ResNet50 reached 95.06%, 94.82%, 95.28%, and 95.05%, respectively, superior to the other methods on the pig face classification task. In addition, it should be pointed out that the experimental results in Table VI were also obtained by training the model several times and averaging the results.

Fig. 13 presents the overall accuracy curves for different epochs and classification algorithms; accuracy was usually higher for TMAM_ResNet50 than for the other algorithms. Fig. 14 displays some examples of recognition of pig face images in unconstrained outdoor conditions by combining T-NPD and TMAM_ResNet50, which demonstrates the effectiveness of TMAM_ResNet50. Consequently, it can be confirmed that TMAM_ResNet50 can realize pig face classification in unconstrained outdoor conditions, and
Fig. 14. Examples of pig face recognition in unconstrained outdoor conditions by combining T-NPD and TMAM_ResNet50. Green boxes represent detected
pig faces, class represents identity of pig, and prob represents confidence.
outperforms other methods on all classification performance metrics.

V. CONCLUSION

In this article, we proposed a method for pig face recognition in unconstrained outdoor conditions. A detector using the T-NPD feature was developed to obtain pig face regions in an image. The TMAM attention mechanism was added to ResNet50 to recognize the detected pig face. Experimental results indicated that the T-NPD feature can better describe the biological characteristics of a pig face and improve the effectiveness of its detection. TMAM was shown to be generic when applied in common networks and more effective than other attention mechanisms. TMAM_ResNet50 also showed excellent performance, with an overall accuracy of 95.06% on the pig face recognition task. In the future, we plan to improve our method with model pruning, which could make it more lightweight and practical. Moreover, we will pursue new research directions, such as the recognition of multiple pigs in one image and the identification of pigs of different ages.

REFERENCES

[1] H. Liu et al., "Development of a face recognition system and its intelligent lighting compensation method for dark-field application," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–16, 2021.
[2] S. A. Perdomo et al., "SenSARS: A low-cost portable electrochemical system for ultra-sensitive, near real-time, diagnostics of SARS-CoV-2 infections," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–10, 2021.
[3] Z. Wang and T. Liu, "Two-stage method based on triplet margin loss for pig face recognition," Comput. Electron. Agricult., vol. 194, Mar. 2022, Art. no. 106737.
[4] Y. Gómez et al., "A systematic review on validated precision livestock farming technologies for pig production and its potential to assess animal welfare," Frontiers Veterinary Sci., vol. 8, May 2021, Art. no. 660565.
[5] E. Rigall, X. Wang, Q. Chen, S. Zhang, and J. Dong, "An RFID tag localization method based on hologram mask and discrete cosine transform," IEEE Trans. Instrum. Meas., vol. 71, pp. 1–12, 2022.
[6] J. Maselyne et al., "Validation of a high frequency radio frequency identification (HF RFID) system for registering feeding patterns of growing-finishing pigs," Comput. Electron. Agricult., vol. 102, pp. 10–18, Mar. 2014.
[7] G. Shi, Y. He, and C. Zhang, "Feature extraction and classification of cataluminescence images based on sparse coding convolutional neural networks," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–11, 2021.
[8] J. Ni, K. Shen, Y. Chen, W. Cao, and S. X. Yang, "An improved deep network-based scene classification method for self-driving cars," IEEE Trans. Instrum. Meas., vol. 71, pp. 1–14, 2022.
[9] R. Xia, G. Li, Z. Huang, L. Wen, and Y. Pang, "Classify and localize threat items in X-ray imagery with multiple attention mechanism and high-resolution and high-semantic features," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–10, 2021.
[10] M. F. Hansen et al., "Towards on-farm pig face recognition using convolutional neural networks," Comput. Ind., vol. 98, pp. 145–152, Jun. 2018.
[11] M. Marsot et al., "An adaptive pig face recognition approach using convolutional neural networks," Comput. Electron. Agricult., vol. 173, Jun. 2020, Art. no. 105386.
[12] R. Wang, Z. Shi, Q. Li, R. Gao, C. Zhao, and L. Feng, "Pig face recognition model based on a cascaded network," Appl. Eng. Agricult., vol. 37, no. 5, pp. 879–890, 2021.
[13] H. Uppal, A. Sepas-Moghaddam, M. Greenspan, and A. Etemad, "Depth as attention for face representation learning," IEEE Trans. Inf. Forensics Security, vol. 16, pp. 2461–2476, 2021.
[14] X. Wang, W. Fan, M. Hu, Y. Wang, and F. Ren, "A self-fusion network based on contrastive learning for group emotion recognition," IEEE Trans. Computat. Social Syst., early access, Sep. 12, 2022, doi: 10.1109/TIM.2022.3193711.
[15] C. Wang, J. Xue, K. Lu, and Y. Yan, "Light attention embedding for facial expression recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 32, no. 4, pp. 1834–1847, Apr. 2022.
[16] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 7132–7141.
[17] X. Chen et al., "Holstein cattle face re-identification unifying global and part feature deep network with attention mechanism," Animals, vol. 12, no. 8, p. 1047, Apr. 2022.
[18] Z. Li and X. Lei, "Cattle face recognition under partial occlusion," J. Intell. Fuzzy Syst., vol. 43, no. 1, pp. 67–77, Jun. 2022.
[19] Y. Qiao, D. Su, H. Kong, S. Sukkarieh, S. Lomax, and C. Clark, "Individual cattle identification using a deep learning based framework," IFAC-PapersOnLine, vol. 52, no. 30, pp. 318–323, 2019.
[20] L. Yao, Z. Hu, C. Liu, H. Liu, Y. Kuang, and Y. Gao, "Cow face detection and recognition based on automatic feature extraction algorithm," in Proc. ACM Turing Celebration Conf. China, May 2019, pp. 1–5.
[21] H. Wang, J. Qin, Q. Hou, and S. Gong, "Cattle face recognition method based on parameter transfer and deep learning," J. Phys., Conf. Ser., vol. 1453, no. 1, Jan. 2020, Art. no. 012054.
[22] B. Xu et al., "CattleFaceNet: A cattle face identification approach based on RetinaFace and ArcFace loss," Comput. Electron. Agricult., vol. 193, Feb. 2022, Art. no. 106675.
[23] A. Hitelman, Y. Edan, A. Godo, R. Berenstein, J. Lepar, and I. Halachmi, "Biometric identification of sheep via a machine-vision system," Comput. Electron. Agricult., vol. 194, Mar. 2022, Art. no. 106713.
[24] H. Xue, J. Qin, C. Quan, W. Ren, T. Gao, and J. Zhao, "Open set sheep face recognition based on Euclidean space metric," Math. Problems Eng., vol. 2021, pp. 1–15, Nov. 2021.
[25] M. Billah, X. Wang, J. Yu, and Y. Jiang, "Real-time goat face recognition using convolutional neural network," Comput. Electron. Agricult., vol. 194, Mar. 2022, Art. no. 106730.
[26] G. Mougeot, D. Li, and S. Jia, "A deep learning approach for dog face verification and recognition," in Proc. Pacific Rim Int. Conf. Artif. Intell., 2019, pp. 418–430.
[27] B. Yoon, H. So, and J. Rhee, "A methodology for utilizing vector space to improve the performance of a dog face identification model," Appl. Sci., vol. 11, no. 5, p. 2074, Feb. 2021.
[28] L. Wang et al., "Giant panda identification," IEEE Trans. Image Process., vol. 30, pp. 2837–2849, 2021.
[29] P. Chen et al., "A study on giant panda recognition based on images of a large proportion of captive pandas," Ecology Evol., vol. 10, no. 7, pp. 3561–3573, Apr. 2020.
[30] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 770–778.
[31] S. Xie, R. Girshick, P. Dollar, Z. Tu, and K. He, "Aggregated residual transformations for deep neural networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 1492–1500.
[32] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. Int. Conf. Learn. Represent. (ICLR), 2015, pp. 1–16.
[33] C. Szegedy et al., "Going deeper with convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 1–9.
[34] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 4700–4708.
[35] F. Yu, D. Wang, E. Shelhamer, and T. Darrell, "Deep layer aggregation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 2403–2412.
[36] S. H. Gao, M. M. Cheng, and K. Zhao, "Res2Net: A new multi-scale backbone architecture," IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 2, pp. 652–662, Feb. 2021.
[37] H. Zhang et al., "ResNeSt: Split-attention networks," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), Jun. 2022, pp. 2736–2746.
[38] R. Chen, D. Cai, X. Hu, Z. Zhan, and S. Wang, "Defect detection method of aluminum profile surface using deep self-attention mechanism under hybrid noise conditions," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–9, 2021.
[39] Y. Cui, Y. An, W. Sun, H. Hu, and X. Song, "Lightweight attention module for deep learning on classification and segmentation of 3-D point clouds," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–12, 2021.
[40] X. Wang, W. Fan, M. Hu, Y. Wang, and F. Ren, "CFJLNet: Coarse and fine feature joint learning network for bone age assessment," IEEE Trans. Instrum. Meas., vol. 71, pp. 1–11, 2022, doi: 10.1109/TIM.2022.3193711.
[41] H. Li, X.-J. Wu, and T. Durrani, "NestFuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models," IEEE Trans. Instrum. Meas., vol. 69, no. 12, pp. 9645–9656, Dec. 2020.
[42] J. Hu, L. Shen, S. Albanie, G. Sun, and A. Vedaldi, "Gather-excite: Exploiting feature context in convolutional neural networks," Adv. Neural Inform. Process. Syst., vol. 31, pp. 9401–9411, 2018.
[43] S. Woo, J. Park, J. Y. Lee, and I. S. Kweon, "CBAM: Convolutional block attention module," in Proc. Eur. Conf. Comput. Vis. (ECCV), Sep. 2018, pp. 3–19.
[44] Q. Hou, D. Zhou, and J. Feng, "Coordinate attention for efficient mobile network design," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2021, pp. 13713–13722.
[45] H. Lee, H.-E. Kim, and H. Nam, "SRM: A style-based recalibration module for convolutional neural networks," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2019, pp. 1854–1862.
[46] S. Liao, A. K. Jain, and S. Z. Li, "A fast and accurate unconstrained face detector," IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 2, pp. 211–223, Feb. 2016.
[47] J. Friedman, T. Hastie, and R. Tibshirani, "Additive logistic regression: A statistical view of boosting," Ann. Statist., vol. 28, no. 2, pp. 337–407, 1998.
[48] L. Bourdev and J. Brandt, "Robust object detection via soft cascade," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 2, Jun. 2005, pp. 236–243.
[49] K. Mohan, A. Seal, O. Krejcar, and A. Yazidi, "Facial expression recognition using local gravitational force descriptor-based deep convolution neural networks," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–12, 2021.
[50] S. Liu, W. Jiang, L. Wu, H. Wen, M. Liu, and Y. Wang, "Real-time classification of rubber wood boards using an SSR-based CNN," IEEE Trans. Instrum. Meas., vol. 69, no. 11, pp. 8725–8734, Nov. 2020.
[51] S. Du, K. Gu, and T. Ikenaga, "Subpixel displacement measurement at 784 FPS: From algorithm to hardware system," IEEE Trans. Instrum. Meas., vol. 71, pp. 1–10, 2022.
[52] Z. Qin, P. Zhang, F. Wu, and X. Li, "FcaNet: Frequency channel attention networks," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2021, pp. 783–792.
[53] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 4510–4520.
[54] A. Howard et al., "Searching for MobileNetV3," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2019, pp. 1314–1324.
[55] I. Radosavovic, R. P. Kosaraju, R. Girshick, K. He, and P. Dollar, "Designing network design spaces," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2020, pp. 10428–10436.
[56] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, "ShuffleNet V2: Practical guidelines for efficient CNN architecture design," in Proc. Eur. Conf. Comput. Vis. (ECCV), Sep. 2018, pp. 116–131.
[57] M. Tan and Q. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in Proc. Int. Conf. Mach. Learn. (ICML), 2019, pp. 6105–6114.

Shuiqing Xu (Member, IEEE) received the Ph.D. degree from the Department of Automation, Chongqing University, Chongqing, China, in 2017. Since 2017, he has been an Associate Professor with the College of Electrical Engineering and Automation, Hefei University of Technology, Hefei, China. His current research interests include image processing, computer vision, and deep learning.

Qihang He received the B.E. degree from the School of Electrical and Information Engineering, Anhui University of Technology, Ma'anshan, China, in 2020. He is currently pursuing the M.E. degree with the College of Electrical Engineering and Automation, Hefei University of Technology, Hefei, China. His research interests include machine learning and deep learning.

Songbing Tao received the Ph.D. degree from the Department of Automation, Chongqing University, Chongqing, China, in 2020. Since 2020, he has been a Post-Doctoral Researcher with the College of Electrical Engineering and Automation, Hefei University of Technology, Hefei, China. His research interests include deep learning and pattern recognition.
Hongtian Chen (Member, IEEE) received the B.S. and M.S. degrees from the School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing, China, in 2012 and 2015, respectively, and the Ph.D. degree from the College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, in 2019. He was a Visiting Scholar with the Institute for Automatic Control and Complex Systems, University of Duisburg–Essen, Duisburg, Germany, in 2018. He is currently a Post-Doctoral Fellow with the Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB, Canada. His research interests include machine learning and pattern recognition. Dr. Chen was a recipient of the Grand Prize of the Innovation Award of the Ministry of Industry and Information Technology of the People's Republic of China in 2019, the Excellent Ph.D. Thesis Award of Jiangsu Province in 2020, and the Excellent Doctoral Dissertation Award from the Chinese Association of Automation (CAA) in 2020. He currently serves as an Associate Editor and a Guest Editor for a number of scholarly journals, such as IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, and IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE.

Weixing Zheng (Fellow, IEEE) received the B.Sc. degree in applied mathematics and the M.Sc. and Ph.D. degrees in electrical engineering from Southeast University, Nanjing, China, in 1982, 1984, and 1989, respectively. Over the years, he has held various faculty/research/visiting positions at Southeast University; the Imperial College of Science, Technology and Medicine, London, U.K.; The University of Western Australia, Perth, WA, Australia; the Curtin University of Technology, Perth; the Munich University of Technology, Munich, Germany; the University of Virginia, Charlottesville, VA, USA; and the University of California at Davis, Davis, CA, USA. He is currently a University Distinguished Professor with Western Sydney University, Sydney, NSW, Australia. Dr. Zheng has served as an Associate Editor for IEEE TRANSACTIONS ON AUTOMATIC CONTROL, IEEE TRANSACTIONS ON FUZZY SYSTEMS, IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, IEEE TRANSACTIONS ON CYBERNETICS, IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, and several other flagship journals. He has been an IEEE Distinguished Lecturer of the IEEE Control Systems Society.