Figure 1: Front and side facial images. 14 landmarks are shown on the front image (a) and 21 landmarks on the profile image (b).

2.2. Methods

Two methods have been used in the automatic prediction of OSA using frontal and profile facial images.

In the first method, we followed the work presented in [10] but replaced the manual landmarking step with an automatic landmark detection step.

Feature   Auto (mean)   Manual (mean)   Mean deviation from manual,
                                        mean(|Auto-Man|/Man) (%)
L27       7.111 cm      7.009 cm        10.0
L62       14.262 cm     13.252 cm       9.0
L65       2.701 cm      2.614 cm        9.2
A18       155.547 deg   160.169 deg     5.3

Table 1: The measured facial features using manual and automatic landmarks on Dataset 1.

Table 1 compares the automatically and manually determined features on Dataset 1 for the four selected uncalibrated facial features. The table shows that the automatic landmark detection introduced approximately a 10% relative error in the determination of the features.
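The deviation figures in Table 1 follow from the metric named in the table header, mean(|Auto-Man|/Man). A minimal sketch of that computation — the per-subject measurement values below are hypothetical, as the paper only reports the per-feature means:

```python
def mean_relative_deviation(auto, manual):
    """Mean of |auto - manual| / manual over subjects, as a percentage
    (the deviation metric reported in Table 1)."""
    ratios = [abs(a - m) / m for a, m in zip(auto, manual)]
    return 100.0 * sum(ratios) / len(ratios)

# Hypothetical per-subject measurements (cm) for one facial feature
auto_vals = [7.2, 6.9, 7.4]
manual_vals = [7.0, 7.1, 6.8]
print(round(mean_relative_deviation(auto_vals, manual_vals), 1))  # -> 4.8
```

Note that the metric normalises by the manual measurement, so it treats the manual landmarks as the reference standard.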
217
         Dataset 1: Training set          Dataset 2: Testing set
         Acc. (%)  Sens. (%)  Spec. (%)   Acc. (%)  Sens. (%)  Spec. (%)
Manual   76.11     78.8       70.1        63.7      67.6       55.0
Auto     68.6      71.1       58.8        69.8      68.5       76.4

Table 2: Performance results of the manually and automatically determined calibrated facial features on the training and testing datasets.
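The accuracy, sensitivity, and specificity in Table 2 follow the usual binary-classification definitions. A minimal sketch with hypothetical labels (1 = OSA, 0 = non-OSA; the paper's actual predictions are not given):

```python
def acc_sens_spec(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) for binary OSA / non-OSA labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return acc, sens, spec

# Hypothetical labels for eight subjects
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
print(acc_sens_spec(y_true, y_pred))  # -> (0.75, 0.75, 0.75)
```

Reporting sensitivity and specificity separately matters here because an OSA screening tool with high accuracy but low sensitivity would still miss many true cases.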
Data set    75 subjects   125 subjects   165 subjects
Dataset 1   57.0 ± 5%     61.0 ± 5%      61.8 ± 5%
Dataset 2   59.0 ± 5%     59.0 ± 5%      60.0 ± 5%

Table 4: OSA detection accuracy using preprocessed image pixel values as neural network inputs.

Data set    75 subjects   125 subjects   165 subjects
Dataset 1   61.0 ± 5%     63.0 ± 5%      66.0 ± 5%
Dataset 2   58.0 ± 5%     62.0 ± 5%      64.0 ± 5%

Table 5: OSA detection accuracy using the four selected facial features as the neural network inputs.

Comparing Table 4 and Table 5 shows that measuring facial landmark features boosts the classifier's prediction performance.

4. CONCLUSION AND FUTURE DIRECTIONS

In this research, the possibility of automatic detection of OSA using frontal and profile facial images was investigated. In a first step, facial images with manually annotated landmarks were used to train the automatic detection of facial landmarks on new images. These landmarks were then used in a classifier to predict OSA; an accuracy of 70% was achieved in this stage using a logistic classifier. In a second step, the same training images (after pre-processing) were fed directly into a neural network classifier, and an average accuracy of 62% was achieved in the detection of OSA. At these accuracies, our algorithms in their current stage cannot be considered a replacement for the normal sleep monitoring procedures used to diagnose OSA, but they demonstrate some of the features that can be used in achieving this goal. New facial features can possibly be incorporated into the algorithm to increase the accuracy.

5. REFERENCES

[1] R. W. W. Lee et al., "Relationship between surface facial dimensions and upper airway structures in obstructive sleep apnea," Sleep, vol. 33, no. 9, pp. 1249-1254, 2010.
[2] C. Ding and D. Tao, "A Comprehensive Survey on Pose-Invariant Face Recognition," pp. 1-40, 2015.
[3] X. Zhu and D. Ramanan, "Face detection, pose estimation, and landmark estimation in the wild," Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2879-2886, 2012.
[4] M. C. El Rai, N. Werghi, H. Al Muhairi, and H. Alsafar, "Using facial images for the diagnosis of genetic syndromes: A survey," in International Conference on Communications, Signal Processing, and their Applications (ICCSPA), 2015, pp. 1-6.
[5] R. J. Schwab, M. Pasirstein, R. Pierson, A. Mackley, R. Hachadoorian, R. Arens, G. Maislin, and A. I. Pack, "Identification of upper airway anatomic risk factors for obstructive sleep apnea with volumetric magnetic resonance imaging," Am. J. Respir. Crit. Care Med., vol. 168, no. 5, pp. 522-530, 2003.
[6] M. Okubo et al., "Morphologic analyses of mandible and upper airway soft tissue by MRI of patients with obstructive sleep apnea hypopnea syndrome," Sleep, vol. 29, no. 7, pp. 909-915, 2006.
[7] F. Espinoza-Cuadros, R. Fernández-Pozo, D. Toledano, J. Alcázar-Ramírez, E. López-Gonzalo, and L. Hernández-Gómez, "Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment," Comput. Math. Methods Med., vol. 2015, pp. 1-13, 2015.
[8] R. W. W. Lee, P. Petocz, T. Prvan, A. S. L. Chan, R. R. Grunstein, and P. A. Cistulli, "Prediction of obstructive sleep apnea with craniofacial photographic analysis," Sleep, vol. 32, no. 1, pp. 46-52, 2009.
[9] H. Nosrati and P. De Chazal, "Apnoea-hypopnoea Index Estimation using Craniofacial Photographic Measurements," in Computing in Cardiology, 2016, in press.
[10] R. Lee, A. Chan, R. Grunstein, and P. Cistulli, "Craniofacial phenotyping in obstructive sleep apnea -- a novel quantitative photographic approach," Sleep, vol. 32, no. 1, pp. 37-45, 2009.
[11] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active Appearance Models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 681-685, 2001.
[12] X. P. Burgos-Artizzu, P. Perona, and P. Dollár, "Robust face landmark estimation under occlusion," Proc. IEEE Int. Conf. Comput. Vis., pp. 1513-1520, 2013.
[13] D. E. King, "Dlib-ml: A Machine Learning Toolkit," J. Mach. Learn. Res., vol. 10, pp. 1755-1758, 2009.
[14] V. Kazemi and J. Sullivan, "One millisecond face alignment with an ensemble of regression trees," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1867-1874.
[15] Python machine learning module scikit-learn: http://scikit-learn.org/stable/