
XAI based model evaluation by applying domain knowledge
Srikanth K S
Department of Electronics and Communication, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
ks_srikanth@blr.amrita.edu

T K Ramesh
Department of Electronics and Communication, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
tk_ramesh@blr.amrita.edu

Suja Palaniswamy
Department of Computer Science Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
p_suja@blr.amrita.edu

Ranganathan Srinivasan
Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai, India
ranga@iitm.ac.in

Abstract— Artificial intelligence and machine learning are estimated to be used in more than 80% of Internet of Things (IoT) devices and have been the driving factor for digital transformation across the globe. Models trained using neural networks are typically black box in nature due to the manner in which the network learns its features. Hence model accuracy alone is not a necessary and sufficient metric in AI. It is essential to explain a model’s outcome and verify that the model has selected features that are significant; this verification can be very domain dependent. In this paper GradCAM is used as an Explainable AI (XAI) technique to explain state of art models. Facial features are identified using the dLib library, and a feature importance metric is computed based on the outcome of GradCAM. The classification problem considered in this paper is face mask detection; state of art models are compared using XAI and verified against the features that a human user would consider significant. A recommendation is provided based on the model accuracies and explainability obtained herewith.

Keywords— XAI, CNN, Deep Learning, Artificial intelligence

I. INTRODUCTION

Software is increasingly being used in autonomous systems such as automobiles, aircraft, health care, etc. These autonomous systems are developed with the objective of building intelligent agents that perceive, learn, decide, and act on their own. However, the effectiveness of and trust in these systems are limited by the models’ explainability. Explainable AI (XAI) is a relatively new concept that has been introduced to provide the ability to understand the workings of an ML/DL algorithm. XAI is a branch of AI where methods and techniques are applied to Artificial Intelligence (AI) so that the results are human interpretable. This field has opened a new problem domain called the “interpretability problem” [1].

With the wide spread of the corona virus (COVID19) there has been great focus on accelerated research, deploying DL for identifying patterns in transmission and possible control of the infections. Most of these models have not been explained using XAI techniques. Applying XAI techniques and interpreting a model to provide explanations that are human interpretable has been taken as the objective of this paper.

An image classification problem, specifically identifying face masks with three output classes, is considered as an example. State of art models are trained using the transfer learning technique. The features that were considered significant in arriving at the decision are analysed using GradCAM [2]. Feature analysis is done to verify the features that were considered significant by the model and compare them against a classification done by a human for the same input scenario.

This paper is organised into the following sections:
1) Section II gives the motivation from COVID19 studies
2) Section III describes the use case, the datasets used and the model accuracies obtained from the state of art models
3) Section IV describes the experimental results obtained by applying XAI
4) Section V summarises the observations and conclusion

II. MOTIVATION FROM COVID19 STUDIES

The outbreak of the COVID19 pandemic has provided significant opportunities to the information technology sector, especially in healthcare, assistive technology and automation, where the use of ML/DL has significantly helped researchers in identifying the presence of COVID19 [3][4][5][6], identifying patterns that can be used to contain the spread [7][8][9], discovering new vaccines [10], understanding the features considered significant by a model [11], etc. An incorrect decision made by an ML/DL model can have an adverse effect, hence it is important to evaluate DL models, particularly those used in safety critical applications. Typically, DL models are evaluated based on the metrics mentioned in Table 1.

Table 1. Accuracy metrics for DL models (TP = true positives, TN = true negatives, FP = false positives, FN = false negatives)

Metric | Formula
Accuracy | No. of correct predictions / Total number of predictions
True positive rate (Sensitivity) | TP / (TP + FN)
True negative rate (Specificity) | TN / (TN + FP)
Recall | TP / (TP + FN)
F1 Score | 2 / (1/Precision + 1/Recall)
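As a concrete illustration, the Table 1 metrics can be computed directly from binary confusion-matrix counts. The sketch below is illustrative only; the counts used in the example call are made-up numbers, not results from this paper.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the Table 1 metrics from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # correct / total predictions
    sensitivity = tp / (tp + fn)                 # true positive rate (= recall)
    specificity = tn / (tn + fp)                 # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 / (1 / precision + 1 / sensitivity)   # harmonic-mean form from Table 1
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "recall": sensitivity, "f1": f1}

# Example with made-up counts:
print(classification_metrics(tp=90, tn=85, fp=15, fn=10))
```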



The metrics mentioned in Table 1 alone cannot be considered a necessary and sufficient condition, due to the inherent black box nature of the models and the lack of explainability. The authors of [12] mention how a model with high accuracy was inherently flawed because it was not looking at the right features to arrive at an outcome. One way to overcome this problem is to use model explainability. Explainable AI (XAI) is a research initiative of DARPA to develop algorithms that unravel the shortcomings of AI [13].

In the subsequent section, the use case considered in this paper is described and the state of art models are compared against the accuracy metrics mentioned in Table 1.

III. FACE MASK CLASSIFIER

Due to the outbreak of COVID19, the World Health Organization has recommended that people wear a face mask and maintain social distancing while in public places. Outbreaks have been reported in restaurants, malls, offices, etc., where people tend to gather in large groups, thereby forcing governments to enforce compliance. Implementing an efficient COVID compliance protocol has gained a lot of interest in recent times. There are several DL models available in the public domain which have been developed in a very short span of time.

Table 2. Papers published around COVID compliance

Application | Method used | Accuracy (%) | Reference | Dataset used
Face mask detection | MTCNN | 89.06 | [14] | -
Face mask detection | RCNN | 90.6 | [15] | [16]
Face mask detection | RCNN | 86.41 | [17] | [18]
Face mask detection | ResNet50 | 96.0 | [19] | CCTV footage
Face mask detection | YoloV3 | 90.0 | [20] | [18]
Face mask detection | MobileNetV2 | 99.0 | [21] | [22]
Face mask detection | MobileNetV2 | 96.85 | [23] | [24]
Face mask detection | ResNet50 | 98.2 | [25] | [18]
Face mask detection | VGG19 | 96.82 | [26] | -
Face mask detection | Inception V3 | 99.9 | [27] | [28]
Social distancing | Fast RCNN | 87.3 | [29] | [30]

Although all the papers mentioned in Table 2 report high accuracy, none of them used XAI techniques to verify whether the model was indeed looking at the right features to arrive at the right decision. The application of XAI techniques to a classification problem, specifically face mask detection, has been taken as the problem statement of this paper.
A. Problem Formulation

The input images used for classification are pre-processed such that the faces are extracted from the scene. Each face is passed through an inference engine that classifies it into one of three classes (Class A, Class B and Class C), where Class A refers to a person wearing a mask incorrectly, Class B refers to a person wearing a mask correctly, and Class C refers to a person not wearing a mask. State of art CNN models, viz. VGG16, VGG19, InceptionV3, MobileNetV2 and ResNet50, are trained. XAI is applied to each trained model to verify whether the right features have been considered in arriving at the classification outcome. The face-extraction step might look like the sketch below.
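The paper does not state which face detector was used for this pre-processing step; the sketch below assumes OpenCV's bundled Haar cascade purely for illustration.

```python
import cv2

# Haar cascade shipped with opencv-python; an assumption, not the paper's stated detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(image_path, size=(224, 224)):
    """Detect faces in a scene and return fixed-size crops for the classifier."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Crop each detected face and resize it to the CNN input resolution.
    return [cv2.resize(image[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]
```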
B. Model Training and Accuracies

CNN is one of the best known models for supervised learning, and XAI techniques can be applied to a CNN to inspect the reasons for the classifications obtained. Datasets were taken from open sources [31], [32]. Transfer learning was applied to reduce the training time, and pre-trained weights from ImageNet were used. In all, around 1000 images were considered for each class. A minimal training sketch is shown below; the training accuracies are summarized in Table 3.
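The following tf.keras sketch is consistent with the setup described above, but the head architecture, optimizer and epoch count are assumptions; the paper only states that ImageNet weights were used and that VGG16, VGG19, MobileNetV2, InceptionV3 and ResNet50 were trained.

```python
import tensorflow as tf

# Frozen ImageNet backbone; swap VGG16 for any of the other four backbones.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained features to reduce training time

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),   # assumed head size
    tf.keras.layers.Dense(3, activation="softmax"),  # Class A / B / C
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # ~1000 images per class
```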
Table 3. Training accuracies

Model Name | Training accuracy | Validation accuracy
VGG16 | 97.83% | 97.80%
VGG19 | 98.99% | 98.90%
MobileNetV2 | 99.71% | 96.73%
InceptionV3 | 99.56% | 98.90%
ResNet50 | 96.82% | 97.80%

It was observed that the training and validation accuracies are about the same for all the CNN models. Table 4 summarizes additional metrics for the multiclass classification problem; the PyCM module in Python was used to capture these metrics (a usage sketch appears at the end of this section).

Table 4. Comparison of key metrics

Metric | Model | Class A (Incorrect Mask) | Class B (With Mask) | Class C (Without Mask)
Area under the curve (%) | VGG16 | 97.25 | 96.89 | 98.29
Area under the curve (%) | VGG19 | 97.00 | 97.67 | 99.05
Area under the curve (%) | MobileNetV2 | 98.81 | 98.06 | 100
Area under the curve (%) | InceptionV3 | 99.76 | 99.30 | 99.38
Area under the curve (%) | ResNet50 | 95.08 | 96.92 | 96.67
Confusion entropy, range [0-1] | VGG16 | 0.1516 | 0.0881 | 0.0844
Confusion entropy, range [0-1] | VGG19 | 0.1273 | 0.07193 | 0.07453
Confusion entropy, range [0-1] | MobileNetV2 | 0.0745 | 0.05594 | 0.0
Confusion entropy, range [0-1] | InceptionV3 | 0.02211 | 0.03105 | 0.02277
Confusion entropy, range [0-1] | ResNet50 | 0.21364 | 0.1084 | 0.11336
95% Confidence interval (per model) | VGG16 | (94.069, 98.422)
95% Confidence interval (per model) | VGG19 | (94.953, 98.904)
95% Confidence interval (per model) | MobileNetV2 | (96.811, 99.776)
95% Confidence interval (per model) | InceptionV3 | (98.375, 100)
95% Confidence interval (per model) | ResNet50 | (92.779, 97.664)

The area under the curve metric is a measure of the ability of a classifier to distinguish between classes; a value between 90% and 100% indicates that the model can be considered excellent. The confusion entropy measure evaluates the confusion level of the class distribution of misclassified samples; a value of 0 indicates perfect classification, so the closer the metric is to zero, the better the model's performance. The confidence interval is the range of true classification for an unseen dataset; a value greater than 95% indicates an excellent model.

From Table 4, all the additional metrics indicate that the models are trained well and will produce an accuracy greater than 90% on unseen data. In the next section, XAI is applied to these models to identify the features the models considered significant.
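A sketch of how these statistics can be pulled from PyCM follows; the label vectors are placeholders, and the attribute names follow PyCM's documented class and overall statistics.

```python
from pycm import ConfusionMatrix

# Placeholder label vectors; in practice these come from the validation set.
y_true = ["A", "A", "B", "B", "C", "C"]
y_pred = ["A", "B", "B", "B", "C", "C"]

cm = ConfusionMatrix(actual_vector=y_true, predict_vector=y_pred)
print(cm.AUC)                      # per-class area under the curve
print(cm.CEN)                      # per-class confusion entropy (0 = perfect)
print(cm.overall_stat["95% CI"])   # 95% confidence interval of overall accuracy
```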
IV. EXPERIMENTAL RESULTS USING XAI
GradCAM [37] is a post-hoc explanation method used to produce heatmaps for pre-trained neural networks. The images were masked based on the intensity of the heatmaps. The heatmaps are normalized from 0 to 255, where 0 indicates that the given pixel was insignificant in arriving at the target classification; as the gradient value increases, the significance of the pixel in the target classification also increases. In Table 5, the pixels that were considered insignificant are masked out. The masked images were passed through the model again and the revised predictions were compared with the original predictions. In all cases the masked image arrived at the same classification as the original image; hence the regions that remain unmasked in Table 5 are alone sufficient to obtain the same prediction results. A sketch of this heatmap-and-masking procedure is given below.
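The following is a minimal tf.keras sketch of the procedure; the final convolutional layer name and the cutoff value differ per backbone and are assumptions here. The heatmap computation follows the standard gradient-weighted channel averaging of Grad-CAM [2].

```python
import numpy as np
import tensorflow as tf

def gradcam_heatmap(model, image, conv_layer_name):
    """Return a Grad-CAM heatmap scaled to 0-255 for the predicted class."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, tf.argmax(preds[0])]
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))        # channel importance
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)                # normalize to [0, 1]
    cam = tf.image.resize(cam[..., None], image.shape[:2])[..., 0]
    return (cam.numpy() * 255).astype("uint8")             # 0 = insignificant pixel

def masked_prediction(model, image, heatmap, cutoff):
    """Zero out pixels below the gradient cutoff and re-classify the image."""
    masked = image * (heatmap >= cutoff)[..., None]
    return int(model.predict(masked[np.newaxis, ...]).argmax())
```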
Table 5. GradCAM post-hoc explanations

[Table 5 shows, for a sample image of each class (A, B and C), the regions left unmasked by GradCAM for VGG16, VGG19, MobileNetV2, InceptionV3 and ResNet50; the images themselves are not reproduced here.]

It is easy to look at the explanations of a subset of images during the training and validation process; however, it is not practically possible to verify the explanation of every image. Further, it is important to consider the features that are significant for the model in arriving at the target classification. In the subsequent sections, the method followed for identifying the regions considered significant is described.

A. Identification of region of interest

A human face has prominent landmarks such as the eyes, nose, mouth, and cheeks. These landmarks can be used to segment a face into regions and to identify the regions that are considered significant by the model. The face can be divided into specific regions, and GradCAM can be applied to identify the number of pixels that were significant in each region.

A prior art search was done to identify models used to detect facial landmarks; they are summarized in Table 6.

Table 6. Facial landmark detection

Methodology | Reference
Using DLibC and OpenCV | [38]
Emotion level using feature level fusion of facial features and body gestures | [39]
Emotion recognition from facial expressions for 3D videos using a Siamese network | [40]

A human face can be divided into four major regions, viz. forehead, nose and mouth, right cheek and left cheek, as shown in Figure 1.

Figure 1. Significant regions in a human face.

A survey was collected from 25 participants; the regions that they would consider significant for identifying the presence of a mask, or a mask worn incorrectly, are shown in Figure 2. As can be seen, the nose and mouth region, followed by the left and right cheeks, was considered most significant by the participants.

[Figure 2 is a bar chart of the number of responses (0 to 25) for each region considered significant: forehead, nose and mouth, left cheek, right cheek, and entire face.]

Figure 2. Survey response.

B. Facial landmark analysis and results

In this paper, DLibC was used to identify the facial landmarks; the model identifies 68 facial landmarks. After obtaining the predictions from the classification model, the face was passed through the DLibC model to identify the facial landmarks, and the face was split into four regions, viz. forehead, nose and mouth, left cheek and right cheek. GradCAM was applied to the model to identify the pixels that were considered significant, and the percentage of pixels that were active in each region was calculated. All pixels with a gradient value less than a threshold were masked, and the masked image was passed through the inference engine. The gradient cutoff value was selected for each model such that the masked image still gave the same classification: if the gradient cutoff is selected as 0, the entire image is considered significant, and if the cutoff is selected as 255, the entire image is considered insignificant. A sketch of the per-region computation is given below; the results observed are captured in Table 7.
and body gestures. Table 7 Comparison of the region significance
Emotion recognition from facial expressions for 3D [40]
videos using Siamese network Model Gradi Class/ Mean Mean Mean Mean
ent No of % % Nose % Left %
A human face can be classified into for major regions viz., cutoff sample Fore Cheek Right
forehead, nose and mouth, right cheek and left cheek regions s head Chin
as shown in Figure 1 Class A 1.46 43.85 33.29 23.98
/ 173
VGG16
140 Class B 26.26 51.54 41.05 40.59
/ 214
Class C 3.18 39.34 40.76 35.63
/ 140 compared and models with similar accuracies were selected.
Class A 5.05 43.24 46.39 40.54 Each model was subjected to post-hoc explanation using
/ 173 GradCAM. The results showed the regions that the models
VGG19 Class B 43.16 55.25 42.44 41.23
102
/ 214
considered significant in arriving at the decision.
Class C 11.46 34.29 64.50 60.72 Post-hoc explanations were further enhanced by
/ 140
identifying the portion of the face that was considered the
Class A 16.32 66.88 75.81 68.14
/ 173 most significant and identifying if these regions were like the
Mobile decisions that will be taken by a human for performing the
Class B 44.05 58.57 63.14 56.21
NetV2 140
/ 214 same classification.
Class C 27.85 54.61 58.73 61.33
/ 140 It was found that although all the state of art models had
Class A 27.31 70.17 65.34 69.11 accuracies greater than 90%, they looked at different regions
/ 173 of the face to arrive at the decision. It was found that VGG16
Inception Class B 61.71 94.89 81.52 95.44 model implemented using transfer learning selected the right
51
V3 / 214 features to arrive at the classification although its accuracies
Class C 16.34 44.17 38.38 41.18
/ 140
were less than MobileNetV2 and InceptionV3. Hence it is
Class A 56.61 93.60 86.74 91.88 important to consider model explainability as a factor before
/ 173 selecting the best model to be used for a given use case.
Resnet
Class B 81.50 93.04 92.04 94.68
50 51
/ 214 In future we aim to develop an algorithm which can further
Class C 26.85 95.11 92.40 96.10 improve the accuracies and the choice of feature selection.
/ 140 The algorithm can be generalized to obtain an explainability
It can be observed that VGG16 uses only 1.46% and 3.18% of the pixels from the forehead region when classifying images into Class A (incorrect mask) and Class C (no mask) respectively. Similarly, its percentages for the nose and mouth, left cheek and right cheek regions are relatively high compared to the other models. The gradient cutoff set for the VGG16 model was 140, meaning the regions considered significant contained pixels with gradient values greater than 140; this indicates that the model had learnt very specific features from the images.

Although all the models gave nearly the same training accuracy, they showed different insights into the regions considered significant when XAI techniques were applied.

From Table 7, the VGG16 model trained using transfer learning has performed very well compared to the other models. The parameters considered in arriving at this conclusion are the accuracy, the area under the curve, the confusion entropy, and the fact that the selected GradCAM gradient cutoff was greater than 140, indicating that the selected pixels were the most significant in arriving at the given classification and that the regions considered significant by the model align with the features that a human user would consider important.

The VGG16 model was subjected to an unseen dataset and the results obtained are captured in Table 8.

Table 8. Comparison with an unseen dataset

Model | Gradient cutoff | Class / No. of samples | Mean % Forehead | Mean % Nose and mouth | Mean % Left cheek | Mean % Right cheek
VGG16 | 140 | Class A / 59 | 1.34 | 45.25 | 36.29 | 22.49
VGG16 | 140 | Class B / 99 | 26.73 | 79.77 | 38.39 | 39.23
VGG16 | 140 | Class C / 70 | 3.41 | 39.09 | 40.05 | 35.88

The results obtained were in line with the training data.
V. CONCLUSION

In this paper, we evaluated multiple standard models for the face mask detector use case. State of art models were compared, and models with similar accuracies were selected. Each model was subjected to post-hoc explanation using GradCAM; the results showed the regions that the models considered significant in arriving at the decision.

The post-hoc explanations were further enhanced by identifying the portion of the face that was considered the most significant and checking whether these regions align with those that a human would use when performing the same classification.

It was found that although all the state of art models had accuracies greater than 90%, they looked at different regions of the face to arrive at the decision. The VGG16 model implemented using transfer learning selected the right features to arrive at the classification, although its accuracies were less than those of MobileNetV2 and InceptionV3. Hence it is important to consider model explainability as a factor before selecting the best model for a given use case.

In future we aim to develop an algorithm which can further improve the accuracies and the choice of feature selection. The algorithm can be generalized to obtain an explainability metric based on a given use case.
REFERENCES

[1] P. Voosen, “How AI detectives are cracking open the black box of deep learning,” Science, 2017. https://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning (accessed Dec. 20, 2020).
[2] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization,” Int. J. Comput. Vis., vol. 128, no. 2, pp. 336–359, Oct. 2020, doi: 10.1007/s11263-019-01228-7.
[3] B. B. Gawde, “A Fast, Automatic Risk Detector for COVID-19,” in 2020 IEEE Pune Section International Conference, PuneCon 2020, 2020, pp. 146–151. doi: 10.1109/PuneCon50868.2020.9362389.
[4] S. Wang et al., “A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19),” medRxiv, 2020, doi: 10.1101/2020.02.14.20023028.
[5] S. Anjomshoae, A. Najjar, D. Calvaresi, and K. Främling, “Explainable Agents and Robots: Results from a Systematic Literature Review,” in Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019, pp. 1078–1088.
[6] K. Ahmed and N. Gouda, “AI Techniques and Mathematical Modeling to Detect Coronavirus,” J. Inst. Eng. Ser. B, vol. 102, no. 6, pp. 1283–1292, Nov. 2021, doi: 10.1007/s40031-020-00514-0.
[7] S. Raghav et al., “Suraksha: Low Cost Device to Maintain Social Distancing during CoVID-19,” in Proceedings of the 4th International Conference on Electronics, Communication and Aerospace Technology, ICECA 2020, 2020, pp. 1476–1480. doi: 10.1109/ICECA49313.2020.9297503.
[8] K. Bhambani, T. Jain, and K. A. Sultanpure, “Real-Time Face Mask and Social Distancing Violation Detection System using YOLO,” in Proceedings of B-HTC 2020 - 1st IEEE Bangalore Humanitarian Technology Conference, 2020, pp. 1–6. doi: 10.1109/B-HTC50970.2020.9297902.
[9] S. Srinivasan, R. Rujula Singh, R. R. Biradar, and S. A. Revathi, “COVID-19 monitoring system using social distancing and face mask detection on surveillance video datasets,” in 2021 International Conference on Emerging Smart Computing and Informatics, ESCI 2021, 2021, pp. 449–455. doi: 10.1109/ESCI50559.2021.9396783.
[10] N. Arora, A. K. Banerjee, and M. L. Narasu, “The role of artificial intelligence in tackling COVID-19,” Future Virol., vol. 15, no. 11, pp. 717–724, Oct. 2020, doi: 10.2217/fvl-2020-0130.
[11] K. Sanjana, V. Sowmya, E. A. Gopalakrishnan, and K. P. Soman, “Explainable artificial intelligence for heart rate variability in ECG signal,” Healthc. Technol. Lett., vol. 7, no. 6, pp. 146–154, 2020, doi: 10.1049/htl.2020.0033.
[12] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’ Explaining the predictions of any classifier,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144. doi: 10.1145/2939672.2939778.
[13] D. Gunning and D. W. Aha, “DARPA’s explainable artificial intelligence program,” AI Mag., vol. 40, no. 2, pp. 44–58, 2019, doi: 10.1609/aimag.v40i2.2850.
[14] A. S. Joshi, S. S. Joshi, G. Kanahasabai, R. Kapil, and S. Gupta, “Deep Learning Framework to Detect Face Masks from Video Footage,” in Proceedings - 2020 12th International Conference on Computational Intelligence and Communication Networks, CICN 2020, 2020, pp. 435–440. doi: 10.1109/CICN49253.2020.9242625.
[15] O. Cakiroglu, C. Ozer, and B. Gunsel, “Design of a deep face detector by mask R-CNN,” in 27th Signal Processing and Communications Applications Conference, SIU 2019, 2019, pp. 1–4. doi: 10.1109/SIU.2019.8806447.
[16] S. Yang, P. Luo, C. C. Loy, and X. Tang, “WIDER FACE: A Face Detection Benchmark,” 2016.
[17] J. Zhang, F. Han, Y. Chun, and W. Chen, “A Novel Detection Framework about Conditions of Wearing Face Mask for Helping Control the Spread of COVID-19,” IEEE Access, vol. 9, pp. 42975–42984, 2021, doi: 10.1109/ACCESS.2021.3066538.
[18] Revanth, “Masked face Dataset (MAFA),” 2020. https://www.kaggle.com/revanthrex/mafadataset?select=1q0UwRZsNGuPtoUMFrP-U06DSYp_2E1wB
[19] G. T. S. Draughon, P. Sun, and J. P. Lynch, “Implementation of a Computer Vision Framework for Tracking and Visualizing Face Mask Usage in Urban Environments,” in 2020 IEEE International Smart Cities Conference, ISC2 2020, 2020, pp. 1–8. doi: 10.1109/ISC251055.2020.9239012.
[20] T. Q. Vinh and N. T. N. Anh, “Real-Time Face Mask Detector Using YOLOv3 Algorithm and Haar Cascade Classifier,” in Proceedings - 2020 International Conference on Advanced Computing and Applications, ACOMP 2020, 2020, pp. 146–149. doi: 10.1109/ACOMP50827.2020.00029.
[21] B. U. Mata, “Face Mask Detection Using Convolutional Neural Network,” J. Nat. Remedies, vol. 21, no. 12 (1), pp. 14–19, 2021.
[22] M. Loey, G. Manogaran, M. H. N. Taha, and N. E. M. Khalifa, “A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the COVID-19 pandemic,” Meas. J. Int. Meas. Confed., vol. 167, p. 108288, 2021, doi: 10.1016/j.measurement.2020.108288.
[23] S. A. Sanjaya and S. Adi Rakhmawan, “Face Mask Detection Using MobileNetV2 in The Era of COVID-19 Pandemic,” in 2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI), 2020, pp. 1–5. doi: 10.1109/ICDABI51230.2020.9325631.
[24] Baojin Huang, “Real-World Masked Face Dataset (RMFD),” 2020. https://github.com/X-zhangyang/Real-World-Masked-Face-Dataset#readme
[25] S. Sethi, M. Kathuria, and T. Kaushik, “Face mask detection using deep learning: An approach to reduce risk of Coronavirus spread,” J. Biomed. Inform., vol. 120, p. 103848, Aug. 2021, doi: 10.1016/j.jbi.2021.103848.
[26] J. Xiao, J. Wang, S. Cao, and B. Li, “Application of a Novel and Improved VGG-19 Network in the Detection of Workers Wearing Masks,” vol. 1518, p. 12041, Apr. 2020, doi: 10.1088/1742-6596/1518/1/012041.
[27] G. J. Chowdary, N. S. Punn, S. K. Sonbhadra, and S. Agarwal, “Face Mask Detection using Transfer Learning of InceptionV3,” 2020.
[28] Prajna Bhandary, “Simulated Masked Face Dataset (SMFD).” https://github.com/prajnasb/observations
[29] K. Suresh, S. Bhuvan, and B. Palangappa M, “Social Distance Identification Using Optimized Faster Region-Based Convolutional Neural Network,” in Proceedings - 5th International Conference on Computing Methodologies and Communication, ICCMC 2021, 2021, pp. 753–760. doi: 10.1109/ICCMC51019.2021.9418478.
[30] T.-Y. Lin et al., “Microsoft COCO: Common Objects in Context,” CoRR, vol. abs/1405.0, 2014.
[31] P. Bhandary, “Dataset - Facemask detection,” 2020. https://github.com/prajnasb/observations/tree/master/mask_classifier/Data_Generator (accessed May 23, 2021).
[32] A. Cabani, K. Hammoudi, H. Benhabiles, and M. Melkemi, “MaskedFace-Net – A dataset of correctly/incorrectly masked face images in the context of COVID-19,” Smart Heal., vol. 19, p. 100144, Mar. 2021, doi: 10.1016/j.smhl.2020.100144.