Face Recognizability Evaluation for ATM Applications With Exceptional Occlusion Handling

Sungmin Eum, Jae Kyu Suhr, and Jaihie Kim
School of Electrical and Electronic Engineering, Yonsei University, Republic of Korea

Biometrics has been extensively utilized to lessen ATM-related crimes. One of the most widely used methods is to capture facial images of the users for follow-up criminal investigations. However, this method is vulnerable to attacks by criminals with heavy facial occlusions. To overcome this drawback, this paper proposes a novel method for face recognizability evaluation with exceptional occlusion handling (EOH). The proposed method conducts a recognizability evaluation based on local regions of the facial components. The resulting decisions are then reaffirmed by the EOH, which exploits the global aspect of frequently occurring facial occlusions. The EOH can be divided into two separate approaches: 1) accepting the falsely rejected cases, and 2) rejecting the falsely accepted cases. In this paper, two typical facial occlusions, eyeglasses and sunglasses, are chosen to prove the validity of the EOH. To evaluate the proposed method in the most realistic environment, an ATM database was constructed by using an off-the-shelf ATM while the users were asked to make withdrawals as they would in real situations. The proposed method was evaluated on this ATM database, which includes 480 video sequences of 20 subjects. The results showed the feasibility of face recognizability evaluation with the EOH in practical ATM environments.

1. Introduction
Automatic Teller Machine (ATM) use is now one of the standard methods for making financial transactions, and it continues to increase due to its convenience [1]. However, as the usage of ATMs has increased, related crimes have also been on the rise, becoming major threats to both customers and banks worldwide [2]. To suppress these crimes, there have been considerable efforts to introduce various methods of biometrics. These biometrics-driven efforts can be categorized into two approaches. The first approach requests biometrics such as the face, fingerprints, or

finger veins as an essential part of on-site user authentication [3]-[5] before the user is allowed to make any financial transactions. The other approach is to capture images of the user at the ATM and use those images for criminal face matching in follow-up criminal investigations [6]. The second approach is more commonly utilized by ATM systems because of its advantages: it provides a non-intrusive environment and consumes less time per transaction. However, it suffers from difficulties in tracking down suspects when their faces are so heavily occluded that they cannot be recognized. It is reported that suspects tend to take advantage of this weakness by occluding their faces with typical objects such as sunglasses or masks [7]-[14]. Figure 1 shows images of suspects with heavy facial occlusions captured by actual ATMs, most of which are difficult to recognize. To reduce this sort of fraud, extensive research has been conducted, which can be categorized into three approaches: specific attack detection, skin color-based occlusion detection, and frontal bare face detection. The first approach, specific attack detection, searches for specific occluding objects such as helmets, masks, or scarves that are commonly used by criminals [7]-[9]. Systems based on this approach reject the users when those specific objects are detected. These methods proved their feasibility for each specific occluding object. However, this approach is constrained to the specifically assigned occlusions, which makes it inadequate for handling the various occlusions that may occur in real ATM situations. Skin color-based occlusion detection is categorized as the second approach. It determines the degree of facial occlusion by analyzing skin color: the skin color ratio in the whole face area [10] or in specific facial regions [11].
Although the skin color-based methods can be applied to various facial poses, they tend to show unstable performance under the various lighting conditions that commonly occur in actual ATM environments [12]. The last category is the frontal bare face detection-based approach [13], [14]. This approach takes advantage of holistic face detectors to locate the regions of interest. A process of determining the facial occlusion is then carried out by partially analyzing the detected face regions: upper and lower facial regions [13], or regions based on facial component detection [14].


Figure 1: Suspects with facial occlusions captured by the cameras on the ATMs.

Figure 2: Faces with typical occlusions that are difficult to handle with a facial component-driven approach. (a) Eyes slightly occluded by the frame of eyeglasses. (b) Local patterns on the reflecting surface of sunglasses may be mistaken for real eyes.

Among the methods in the third approach, the one using the facial components found inside a frontal face region [14] possesses several advantages over the first and second approaches (specific attack detection and skin color-based occlusion detection). First of all, it is able to handle users with various partial occlusions since it looks for the existence of facial components instead of detecting specific occluding objects. Moreover, it can show relatively superior performance over the second approach under various lighting conditions by employing a gray image-based detection and verification scheme. Although the method can show degraded performance with non-frontal faces, this problem can be overcome by applying a scenario that guarantees images of frontal faces.

In spite of all these advantages, the above method bears the following problems when applied in actual ATM environments, because it relies on the local information of facial components such as the eyes or mouth. The first type of problem arises when a partial, yet acceptable, occlusion over the facial features causes the system to reject a face even though it is recognizable from a global perspective. Figure 2(a) shows a typical example where the eyes are slightly occluded by the frame of a pair of eyeglasses. The second type of problem is falsely accepting a user by wrongly locating a local region that closely resembles an actual facial component. Figure 2(b) shows a frequently occurring situation where local patterns found on the reflecting surface of sunglasses could be mistaken for real eyes (details are discussed in the later sections).

To overcome these obstacles, this paper proposes a method that combines a facial component-driven recognizability evaluation with an exceptional occlusion handling (EOH) process. After the system carries out a face recognizability evaluation based on verifying the facial components within the face area, the EOH reaffirms whether the user has been falsely rejected or falsely accepted due to any misleading occlusions. The EOH process can be carried out by two different approaches: 1) accepting the falsely rejected cases, and 2) rejecting the falsely accepted cases. In this paper, two typical facial occlusions (eyeglasses and sunglasses) which frequently interrupt the facial component-driven recognizability evaluation are chosen to represent the two approaches of the EOH process. In the experiments, to build a more realistic database, we used an off-the-shelf ATM while the users were asked to make actual withdrawals. In this way, we acquired a database consisting of 480 video sequences of 20 subjects. The evaluation results on the acquired ATM database clearly showed reasonable performance in practical ATM environments.

2. Overview of the Proposed Method

Before discussing face recognizability, it is required to define what a recognizable facial image is. In this paper, a facial image that contains a visible mouth and at least one visible eye is defined as recognizable. A visible mouth and a visible eye mostly guarantee that half of the face is visible in the longitudinal direction, and longitudinal half faces are proven to provide adequate information for criminal investigations [15] and automatic face recognition [16]. The nose is excluded from the definition under the assumption that there are almost no criminals who occlude only the nose area while the other facial components are completely visible.

The proposed method operates according to the flowchart shown in Figure 3. To begin with, image sequences are acquired during the 4 to 5 second period centered at the moment of card insertion. The images with higher probabilities of containing frontal faces are selected to be used in the following procedures. Recognizability evaluation is then performed by verifying the facial components (eyes and mouth) found in the face regions of the selected images. Finally, the exceptional occlusion handling (EOH) process is carried out to handle the typical occlusions that frequently occur in real ATM situations.
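As a minimal sketch (ours, not the authors' code), the recognizability definition above reduces to a simple predicate on the visibility of the facial components:

```python
# Sketch of the recognizability rule defined above: a facial image is
# recognizable when the mouth and at least one eye are verified as
# visible. The nose is deliberately ignored, as in the definition.
def is_recognizable(left_eye_visible, right_eye_visible, mouth_visible):
    """Return True when the facial image counts as recognizable."""
    return mouth_visible and (left_eye_visible or right_eye_visible)
```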

As shown in the flowchart, the EOH process consists of two different approaches: one is accepting the falsely rejected cases and the other is rejecting the falsely accepted cases. The former approach is activated when the recognizability evaluation procedure decides that the face is non-recognizable. The decision is reaffirmed by checking for the existence of typical acceptable occlusions (in this case, eyeglasses) near the facial components. The latter approach is activated when the face is evaluated as recognizable. It reaffirms the primary decision by searching for non-acceptable, yet commonly occurring occlusions (in this case, sunglasses) within the face regions. Since the objective of the system is to acquire a recognizable face of the user, it is designed to terminate as soon as a recognizable facial image is acquired.

Figure 3: Flowchart of the proposed method.

Figure 4: ATM environment. (a) Recognizability evaluation being conducted. (b) ATM used for database acquisition.

3. Scenario and Image Selection

3.1. ATM Scenario

In the process of designing our recognizability evaluation scenario, we recorded and analyzed several video sequences in which a user walks in and performs a financial transaction using a real ATM, as shown in Figure 4(a). ComNet-9000DM [17], one of the popular ATM models in major Korean banks, was selected for acquiring the video sequences. From the observation of the recorded videos, we picked up a simple fact: there is a higher probability of acquiring images of frontal faces around the moment of inserting the card than at any other moment of a transaction. The reason lies in the fact that the camera on this ATM model is installed right above the card slot, as shown in Figure 4(b), and almost all of the users were observed to glance at the card slot for one or two seconds while trying to insert their cards. Under the assumption that ATM users are likely to show their frontal faces sometime while inserting their cards, we set up our scenario to evaluate the recognizability of the facial images obtained during the 4 to 5 second period centered at the moment of card insertion. This scenario was used to acquire our ATM database containing 480 video sequences of 20 subjects; each video sequence contains 61 frames. Details on the ATM database acquisition are discussed in section 6.

3.2. Selecting Quality Images

Computational resources are often limited in real-time embedded systems such as ATMs. Moreover, the recognizability evaluation is required to finish as soon as possible for the convenience of the users. Thus, we need to select, from each video sequence, a certain number of images that have a high probability of being recognizable. To select these quality images, we have come up with a simple measure called the "frontal face response". The frontal face response of an image measures the number of faces detected by the Viola-Jones frontal face detector [18]. This detector looks for frontal faces by sliding a window over various locations and scales. Therefore, it produces a greater number of face responses, with slightly variant locations and scales, on frontal faces than on non-frontal faces, as shown in Figure 5.

Figure 5: Frontal face responses for three different facial postures. From left to right, frontal, near-frontal, and non-frontal face postures are shown.
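The measure can be sketched as follows. `detect_faces` stands in for the raw Viola-Jones detector output; with OpenCV, for instance, `cv2.CascadeClassifier.detectMultiScale` with a low `minNeighbors` value would keep the overlapping raw hits from being merged (that OpenCV detail is our assumption, not something stated in the paper):

```python
# Sketch of the "frontal face response": the response of an image is the
# number of raw windows the frontal-face detector fires on it. Frontal
# faces attract many overlapping detections at nearby scales/locations.
def frontal_face_response(image, detect_faces):
    """Count the raw face detections fired on `image`."""
    return len(detect_faces(image))

def select_quality_frames(frames, detect_faces, k=3):
    """Return the k frames with the highest frontal face response."""
    return sorted(frames,
                  key=lambda f: frontal_face_response(f, detect_faces),
                  reverse=True)[:k]
```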

The frontal face response is used as an indirect measure of an image's degree of recognizability, under the assumption that frontal faces have a higher probability of being verified as recognizable. Figure 6(a) shows the histogram of the face responses of 29280 images from the 480 video sequences of the acquired ATM database (details are explained in section 6.1). Each bin of the histogram indicates a frame index within a sequence (a sequence is composed of 61 frames), and the numbers above the sample images indicate the frame numbers. It is observable that the quality images for our scenario are found around the 32nd frame, which is in accordance with the moment of card insertion. Therefore, we have decided to utilize several frames per sequence centered around the 32nd frame.

Figure 6(b) presents two image selection methods, the consecutive (top) and the intermittent (bottom) selection. After a careful comparison of the two methods, the intermittent selection method was chosen because it provides the system with higher variations of frontal faces than the similar frontal faces produced by the consecutive selection. Moreover, selecting the images intermittently was shown to avoid occasional cases where the set of consecutive frames is aggregated around a certain frame containing a non-frontal face of the user. Note that the number of frames and the set of frames to be used in the system should be chosen based on the performance of the camera and the hardware specifications of the ATM.

Figure 6: Selecting quality images. (a) Histogram of the "face response" acquired from 480 sequences; several sample frames of a typical video sequence are shown above the histogram. (b) Consecutive image selection (top) and intermittent image selection (bottom).

4. Facial Component-driven Recognizability Evaluation

Before the system searches for facial features such as the eyes or mouth, holistic frontal face detection is applied to the whole image to define the regions of interest (ROIs). After the ROIs are defined, the locations of the eyes and mouths are searched for inside the given ROIs. The proposed system utilizes the Viola-Jones general object detector [19] for detecting both the frontal faces and the facial components; it is one of the most widely used object detectors because of its high detection rate [20] and computational efficiency [21]. This procedure is shown in the 1st row of Figure 7. After the facial component detection process within the ROIs, face candidates are generated by selecting only the eye-mouth combinations that satisfy predefined geometric constraints, among every combination that can be made inside the ROI. The rest of the facial component detection results are discarded, as shown in the bottom row of Figure 7. Finally, a simple verification method is applied to the facial component regions to confirm the recognizability of the face.

Figure 7: Facial component detection. 1st row: detecting the facial components inside the face region. 2nd row: selecting the facial components within the region that satisfy the predefined geometric constraints. The detected face, eye, and mouth regions are shown by solid, dotted, and dashed lines, respectively. In this figure (along with Figure 8 and Figure 10), only the face regions were cropped from the large original images after the evaluation, to show the details of the detection results; the large original images are shown in Figure 11.
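The candidate-generation step can be sketched as below. The two constraints used here (the eye must lie above the mouth, and the components must be reasonably close) are illustrative assumptions on our part; the paper does not spell out its actual geometric thresholds:

```python
from itertools import product

# Sketch of face-candidate generation: enumerate every eye-mouth pairing
# inside an ROI and keep only the pairings that satisfy simple geometric
# constraints. Boxes are (x, y, w, h) in ROI coordinates.
def face_candidates(eye_boxes, mouth_boxes):
    candidates = []
    for eye, mouth in product(eye_boxes, mouth_boxes):
        ex, ey, ew, eh = eye
        mx, my, mw, mh = mouth
        eye_above_mouth = ey + eh <= my        # eye must lie above the mouth
        reasonably_close = (my - ey) < 4 * eh  # components not too far apart
        if eye_above_mouth and reasonably_close:
            candidates.append((eye, mouth))
    return candidates
```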

For verifying the facial component regions, principal component analysis (PCA) feature extractors and support vector machine (SVM) classifiers are utilized. These procedures were conducted using the methods explained in [22].

5. Exceptional Occlusion Handling

Facial component-driven face recognizability evaluation holds a weakness in dealing with the various facial occlusions that occur frequently in real situations. Recognizability evaluation becomes a challenging problem when typical facial occlusions exist close to the eye or mouth regions, directly interfering with the detection and verification processes. Figure 8 depicts several cases where two types of misclassification occur due to typical facial occlusions. As can be seen in the figure (1st row), eye detectors may fail in detecting or verifying eyes that are present close to the frame of a pair of eyeglasses. In the case of sunglasses (2nd row of Figure 8), false eye approvals may occur due to falsely detected eye regions on the reflections of the sunglasses. In short, the existence of typical facial wear like eyeglasses may hinder a normal, innocent user from getting approval to use the ATM, while a suspicious user wearing sunglasses could be evaluated as a recognizable user due to reflective surfaces that bear eye-like regions. While the first case could induce numerous customer complaints, the second case indirectly gives permission to ATM fraud and brings difficulties to the follow-up investigations.

Figure 8: Two types of misclassification caused by typical facial occlusions. 1st row: faces determined as non-recognizable due to the adjacent frame of the eyeglasses near the eye region; the images on the left and center are falsely rejected by the verification process, and the image on the right is rejected due to a failure to detect the eyes. 2nd row: faces determined to be recognizable due to misclassifying the reflection on the sunglasses as an eye.

To handle these problems, the proposed method facilitates the exceptional occlusion handling (EOH) process, which deals with both types of falsely evaluated cases. The conceptual diagram in Figure 9 describes the need for the EOH to handle the two different types of misclassification. The EOH can be divided into two different approaches: 1) accepting the falsely rejected cases, and 2) rejecting the falsely accepted cases. Both approaches ought to follow the rule that each should concentrate on its sole purpose while keeping any adverse effects under control. That is, while an eyeglasses detector can be devised for the purpose of accepting falsely rejected users wearing eyeglasses, this detector should avoid generating false detections of eyeglasses when given a face wearing sunglasses. In other words, each EOH detector should be designed to detect its target objects exclusively, even at the cost of a slightly lower detection rate.

Figure 9: A conceptual diagram describing the exceptional occlusion handling (EOH). The four-pointed star and the red circle indicate the boundary of the recognizable faces and the acceptance boundary, respectively. Falsely accepted faces (fan-shaped regions) and falsely rejected faces (triangle-shaped regions) are handled by separate schemes in the EOH process.

As shown in Figure 3, the type of EOH process is chosen based on the outcome of the facial component-driven recognizability evaluation. When the facial component-driven phase accepts the user, the EOH proceeds by investigating whether the user has been falsely accepted. When the opposite occurs and the user is rejected, the facial image is reexamined to prevent a false rejection.

Figure 10: Detecting the faces with typical facial occlusions; each image corresponds to an image in Figure 8. 1st row: faces determined as recognizable which would have been falsely rejected without the EOH. 2nd row: faces determined as non-recognizable which would have been falsely accepted without the EOH.
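Under assumed boolean detector outputs, the routing logic of the two EOH approaches can be sketched as:

```python
# Sketch of the EOH reaffirmation step: an accepted face is re-checked
# for non-acceptable occlusions (sunglasses), while a rejected face is
# re-checked for acceptable occlusions (eyeglasses).
def eoh_decision(component_accepts, eyeglasses_found, sunglasses_found):
    """Return the final accept (True) / reject (False) decision."""
    if component_accepts:
        # Reject falsely accepted cases, e.g. eye-like sunglass reflections.
        return not sunglasses_found
    # Accept falsely rejected cases, e.g. eyes close to eyeglass frames.
    return eyeglasses_found
```

Note that each branch consults only its own detector, matching the rule that each approach concentrates on its sole purpose.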

In this paper, to demonstrate the feasibility of the proposed method, two typical facial occlusions, eyeglasses and sunglasses, were chosen as the specific targets to be handled by the two types of EOH process. The process begins with an effort to detect eyeglasses or sunglasses inside the predefined face ROIs. If such objects are detected, they are paired with a mouth, if any, found within the same ROI to form a face candidate. Then, the face candidate is confirmed as a face wearing a certain object after assuring that the detected components satisfy the geometric constraints of a reasonable face. Finally, the decision on whether the user should be accepted is made based on the results from the EOH: the user is given approval if he or she is proven to be wearing normal eyeglasses, and is rejected if found to be wearing a pair of sunglasses. Figure 10 depicts how the EOH handles the misclassifications shown in Figure 8.

5.1. Training the Detectors for EOH

Detectors for eyeglasses and sunglasses were trained by adopting the Viola-Jones object detection framework [19]. To satisfy the sole purpose of the EOH mentioned previously, the eyeglasses and sunglasses detectors were tuned to detect their target objects exclusively, even at the cost of a slightly lower detection rate.

Positive samples for training the eyeglasses detector were obtained from various sources: the Internet, the CAS-PEAL-R1 [23] face database, and our own facial occlusion database acquired using a web camera. From these sources, 950 eyeglasses images (600 from the Internet, 200 from CAS-PEAL-R1, and 150 from our facial occlusion database) were cropped to be used in the training. Several manipulations of these original samples were then carried out to construct a bigger set of 7600 positive eyeglasses images which could better represent the variations of the target object. First, the original samples were manually rotated to be aligned in the horizontal direction before being added to the training set. Then, additional samples were obtained by rotating the aligned samples by 5 randomly chosen angles between -5° and +5°; these rotated samples were also added to the set. Finally, the mirrored images of the newly generated samples were added to the training set. This sample manipulation procedure was adopted from [24]. The negative samples were obtained from the general face-free images of [25], along with eyeglasses-free facial images from the Internet and from several publicly available face databases such as AR [26], GEORGIA TECH [27], and VALID [28]. The eyeglasses-free facial images were included in the negative set to minimize possible false detections inside the face region.

In the process of training the sunglasses detector, 700 original sunglasses images were cropped from AR [26], CAS-PEAL-R1 [23], our facial occlusion database, and images downloaded from the Internet. These images were manipulated to generate 8000 sunglasses images. The manipulations of the positive samples and the construction of the negative samples were carried out in the same manner as in the eyeglasses case.

6. Experimental Results

6.1. ATM Database Acquisition

To evaluate the proposed method in the most practical manner, we have constructed an ATM database which reflects real ATM environments. We have chosen to use ComNet-9000DM [17], one of the popular ATM models extensively used by major Korean banks. The ATM has a built-in camera located just above the card slot, as shown in Figure 4(b). The camera's resolution and approximate field of view are 640×480 pixels and 90°×70° in the horizontal and vertical directions, respectively. The subjects were given a cash card in a wallet and were asked to act as naturally as possible at the ATM. To strengthen the realism, they were not given any information regarding the scenario or the location of the camera. Although the ATM was located indoors, the lighting environment varied slightly according to the time of acquisition, which was randomly chosen between 9 A.M. and 9 P.M.

Figure 11: Example images of the acquired ATM database. The top two rows and the bottom two rows are acceptable and non-acceptable cases, respectively.
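The positive-sample manipulation described in the detector-training step above (alignment, small random rotations, then mirroring) can be sketched as follows; `rotate` is a stand-in for a real image rotation such as one built on `cv2.warpAffine`, injected here so the bookkeeping can be shown without an image library:

```python
import random

# Sketch of the training-sample augmentation: each aligned crop is
# rotated by a few small random angles, and every resulting sample is
# also mirrored horizontally.
def augment(aligned_samples, rotate, n_rotations=5, max_deg=5.0):
    out = []
    for img in aligned_samples:
        variants = [img]
        for _ in range(n_rotations):
            angle = random.uniform(-max_deg, max_deg)  # within +/-5 degrees
            variants.append(rotate(img, angle))
        out.extend(variants)                     # originals and rotations
        out.extend(mirror(v) for v in variants)  # their horizontal mirrors
    return out

def mirror(img):
    """Flip a row-major image (a list of rows) horizontally."""
    return [row[::-1] for row in img]
```

With these defaults each crop expands into 12 samples; the paper's own expansion (950 crops into 7600 images, a factor of eight) evidently used slightly different bookkeeping, so the counts here are purely illustrative.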

The numbers in the table are expressed in In the case of the recognizability evaluation without using the EOH.2. respectively. and with EOH for both of the occlusions (eyeglasses and sunglasses). The 5 to 6% performance enhancements are significant in the actual ATM situations considering the enormous number of ATM users. For detailed performance evaluation and comparison for the 4 implementations. This table presents the accuracies for 4 different occlusion cases.M. the sequence is regarded as ‘non-recognizable’ if and only if all the frames are evaluated as ‘non-recognizable’. and a mask (non-acceptable). each sequence comprising 61 single images (frames). Lastly. Table 2: Performance of the proposed system for typical facial occlusions. respectively. with EOH for eyeglasses (EG) only. That is. They were obtained based on the acceptability of each facial occlusion. Figure 12: ROC Curves showing the optimized performance using the EOH 6. the false acceptance rate (FAR) indicates the percentage of the falsely approved users wearing non-acceptable occlusions while the true acceptance rate (TAR) indicates the percentage of the correctly approved users wearing acceptable occlusions. the receiver operating characteristic (ROC) curves are depicted in Figure 12. 3 different eyeglasses and 3 different sunglasses were given for each subject.1. the performances for users wearing eyeglasses and the users wearing sunglasses were clearly poorer than the other occlusion cases. Each subject was asked to wear 3 different typical occlusions including eyeglasses (acceptable). P. the red solid line shows the performance of the proposed method supported by the EOH for both (eyeglasses and sunglasses) of the occlusion cases. To secure the diversity of the occluding objects. EG and SG indicates eyeglasses and sunglasses. In this figure.Table 1: Description of the acquired ATM database. 
Performance Evaluation The performance of the proposed method was evaluated using the acquired ATM database described in section 6. Table 2 shows the evaluation results for 4 different implementations: recognizability evaluation without EOH. Example images of the acceptable and non-acceptable occlusions are shown in Figure 11 and the details regarding the database are described in Table 1. The black dotted line presents the performance of the recognizability evaluation without any EOH process. the performance for the users wearing the eyeglasses was improved by 5% while maintaining other performances under control. Video sequences without any facial occlusions were also acquired. sunglasses (non-acceptable). with EOH for sunglasses (SG) only. The green dash-dotted and blue dashed lines indicate the performances when the EOH for the eyeglasses and the EOH for the sunglasses were applied. the recognizability evaluation supported by the EOH considering both of the occluding objects were shown to perform better than the other implementations without bringing any adverse effects towards the cases of bare face or mask. Lastly. a sequence is regarded as ‘recognizable’ if more than one frame is evaluated as ‘recognizable’. The ROC curves were obtained by gradually incrementing the threshold values for the facial component verifiers. The implementation with the EOH for only the sunglasses was showed the similar trend by 6% increase in the performance. The database consists of 480 video sequences. In the proposed method. When the EOH with eyeglasses handling was added. The figure clearly 95 .
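The sequence-level decision rule and the two rates can be sketched as follows (our shorthand, with one boolean decision per sequence):

```python
# Sketch of the evaluation bookkeeping: a sequence is accepted when at
# least one of its frames is evaluated as recognizable, and FAR/TAR are
# computed from the per-sequence decisions.
def sequence_accepted(frame_decisions):
    """True if any frame in the sequence was evaluated as recognizable."""
    return any(frame_decisions)

def far_tar(accepted, acceptable):
    """accepted[i]: the system approved sequence i;
    acceptable[i]: the occlusion worn in sequence i was acceptable."""
    pos = [d for d, a in zip(accepted, acceptable) if a]
    neg = [d for d, a in zip(accepted, acceptable) if not a]
    tar = sum(pos) / len(pos)  # correctly approved acceptable users
    far = sum(neg) / len(neg)  # falsely approved non-acceptable users
    return far, tar
```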

7. Conclusions

This paper proposed a recognizability evaluation method which combines a facial component-driven approach with exceptional occlusion handling (EOH), taking eyeglasses and sunglasses into account. The feasibility of the proposed method was proved using a practical database acquired with an off-the-shelf ATM under a realistic withdrawal scenario. In the following research, EOH that deals with other types of typical occlusions, such as a mustache near the mouth, will be considered. In addition, we are planning to devise a recognizability evaluation scheme robust to various unconstrained illumination conditions and facial postures.

References

[1] "The top 50 inventions of the past 50 years," Popular Mechanics, Dec. 2005.
[2] "ATM Crime: Overview of the European situation and golden rules on how to avoid it," European Network and Information Security Agency (ENISA), Aug. 2009.
[3] H. Sako and T. Miyatake, "Image-recognition technologies towards advanced automated teller machines," in Proceedings of the International Conference on Pattern Recognition, 2004.
[4] S. Prabhakar, S. Pankanti, and A. K. Jain, "Biometric recognition: security and privacy concerns," IEEE Security & Privacy, vol. 1, no. 2, pp. 33-42, Mar. 2003.
[5] G. Graevenitz, "Biometric authentication in relation to payment systems and ATMs," Datenschutz und Datensicherheit, vol. 31, 2007.
[6] "Face pose analysis from MPEG compressed video for surveillance applications," in Proceedings of the International Conference on Information Technology: Research and Education, 2003.
[7] C. Wen, S. Chiu, J. Liaw, and C. Lu, "The safety helmet detection for ATM's surveillance system via the modified Hough transform," in Proceedings of the Annual IEEE International Carnahan Conference on Security Technology, 2003.
[8] C. Wen, S. Chiu, Y. Tseng, and C. Lu, "The mask detection technology for occluded face analysis in the surveillance system," Journal of Forensic Science, vol. 50, no. 3, 2005.
[9] R. Min, A. D'Angelo, and J. Dugelay, "Efficient scarf detection prior to face recognition," in Proceedings of the 18th European Signal Processing Conference, pp. 259-263, 2010.
[10] "Face occlusion detection by using b-spline active contour and skin color information," in Proceedings of the International Conference on Control, Automation, Robotics and Vision, pp. 627-632, 2004.
[11] "A new video surveillance system employing occluded face detection," Lecture Notes in Computer Science, vol. 3533, pp. 549-553, 2005.
[12] P. Kakumanu, S. Makrogiannis, and N. Bourbakis, "A survey of skin-color modeling and detection methods," Pattern Recognition, vol. 40, no. 3, pp. 1106-1122, 2007.
[13] D. Lin and M. Liu, "Face occlusion detection for automated teller machine surveillance," Lecture Notes in Computer Science, vol. 4319, pp. 641-651, 2006.
[14] J. K. Suhr, S. Eum, H. G. Jung, G. Li, G. Kim, and J. Kim, "Recognizability assessment of facial images for automated teller machine applications," submitted to Pattern Recognition, 2011.
[15] "Computerised facial construction and reconstruction," in Proceedings of the Asia Pacific Police Technology Conference, 1991.
[16] N. Ramanathan and R. Chellappa, "Face verification across age progression," IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3349-3361, Nov. 2006.
[17] ComNet-9000DM, http://www.chunghocomnet.com/english/.
[18] P. Viola and M. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, May 2004.
[19] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 511-518, 2001.
[20] M. Hussein, F. Porikli, and L. Davis, "A comprehensive evaluation framework and a comparative study for human detectors," IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 3, pp. 417-427, 2009.
[21] J. Wu, S. C. Brubaker, M. D. Mullin, and J. M. Rehg, "Fast asymmetric learning for cascade face detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 369-382, Mar. 2008.
[22] I. Choi and D. Kim, "Facial fraud discrimination using detection and classification," Lecture Notes in Computer Science, vol. 6455, pp. 199-208, 2010.
[23] W. Gao, B. Cao, S. Shan, X. Chen, D. Zhou, X. Zhang, and D. Zhao, "The CAS-PEAL large-scale Chinese face database and baseline evaluations," IEEE Transactions on Systems, Man, and Cybernetics - Part A, vol. 38, no. 1, pp. 149-161, Jan. 2008.
[24] "Tutorial: OpenCV haartraining," http://note.sonots.com/SciSoftware/haartraining.html.
[25] P. Viola and M. Jones, "Fast multi-view face detection," Technical Report TR2003-96, Mitsubishi Electric Research Laboratories, Jul. 2003.
[26] A. Martinez and R. Benavente, "The AR face database," CVC Technical Report #24, Jun. 1998.
[27] Georgia Tech face database, http://www.anefian.com/research/face_reco.htm.
[28] N. A. Fox, B. O'Mullane, and R. B. Reilly, "VALID: A new practical audio-visual database, and comparative results," Lecture Notes in Computer Science, vol. 3546, pp. 777-786, 2005.