
Literature Survey

[1] Model used: CNN-BiLSTM neural network
    Dataset: Coswara Dataset
    Results:
    • Overall discrimination accuracy of 94.58% and 92.08% using shallow and deep recordings
    • Overall AUC ROC of 0.90
    • Overall accuracy of 94.58%

[2] Model used: CNN & ResNet
    Dataset: University of Cambridge Dataset
    Results:
    • AUC ROC of 0.846

[3] Model used: Deep CNN (DCNN)
    Dataset: University of Cambridge Dataset
    Results:
    • 95.45% accuracy

[4] Model used: SSL (Self-Supervised Learning), CNN, SVM
    Datasets: Virufy, COUGHVID, Coswara
    Results:
    • The SSL, CNN, and SVM approaches achieve an AUC of 0.807, 0.802, and 0.75 on the validation set
    • The SSL & CNN approaches achieve an AUC of 0.791 and 0.775
3/1/2021 1
[5] Model used: YAMNet, CNN
    Datasets: University of Lleida, University of Cambridge, Coswara, Virufy
    Results:
    • The classification models performed better when comparing C vs. PT than when comparing C vs. N or C vs. In.
    • In C vs. PT, the best-performing metrics were Accuracy = 94.81%, Sensitivity = 98.91% for RF, Precision = 97.13% for LR, F-score = 97% for RF, and AUC = 97.29 for SVM.
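Entry [5] reports accuracy, sensitivity, precision, and F-score side by side. These all derive from the same binary confusion matrix, as this small sketch shows; the labels and predictions are made up for illustration and are not from the cited paper:

```python
def binary_metrics(labels, preds):
    """Confusion-matrix metrics for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    accuracy = (tp + tn) / len(labels)
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f_score

# Hypothetical predictions over 8 recordings.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
preds  = [1, 1, 1, 0, 0, 0, 1, 0]
acc, sens, prec, f1 = binary_metrics(labels, preds)
print(acc, sens, prec, f1)  # 0.75 0.75 0.75 0.75
```

A high sensitivity (98.91% for RF in [5]) with lower precision indicates the model rarely misses positives but admits some false alarms.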

[6] Model used: CNN, an LSTM, and a ResNet50
    Datasets:
    • Coswara, Sarcos, ComParE
    • TASK, Brooklyn and Wallacedene
    • Google Audio Set & Freesound, and Librispeech
    Results:
    • ROC AUC of 0.98, 0.94 and 0.92 respectively for all three sound classes: coughs, breaths and speech

