
LITERATURE SURVEY

A literature survey is an important step in the software development process. Before developing the tool, it is necessary to determine the time factor, economy, and company strength. Once these requirements are satisfied, the next step is to determine which operating system and programming language can be used for developing the tool. Once the programmers start building the tool, they need a great deal of external support, which can be obtained from senior programmers, from books, or from websites. These considerations are taken into account before building the proposed system.

The NABS proposal encompasses two possible architectures based on the comparative speeds of the involved biometries. It also provides a novel solution to the data normalization problem through the new quasi-linear sigmoid (QLS) normalization function. According to the presented experimental comparisons, this function can overcome a number of common limitations. A further contribution is the system response reliability (SRR) index, which measures response confidence. Its theoretical definition makes it possible to take the composition of the gallery at hand into account when assigning a reliability measure on a single-response basis. The unified experimental setting aims at evaluating these aspects both separately and together, using face, ear, and fingerprint as test biometries. The results provide positive feedback for the overall theoretical framework developed herein. Since NABS is designed to allow both a flexible choice of the adopted architecture and a variable composition and/or substitution of its optional modules, i.e., QLS and SRR, it can support different operational settings.

The paper analyzes the features affecting the design of the novel approaches for biometric systems (NABS) framework, which typically characterize any biometric application setting:

1. The biometry set: to exemplify a possible instantiation of the NABS framework, we combine face, ear, and fingerprint biometries. We try to meet both effectiveness and efficiency, since face and ear are efficient but not sufficiently effective, while fingerprints, in contrast, guarantee a high recognition rate (RR), yet at slower speed.
2. The normalization method: each system may return results using different dimensionalities and scales; we propose a normalization function providing good results even when the maximum value to normalize is not known.
3. The integration schema: present systems follow three possible design choices, i.e., parallel, serial, or hierarchic. One of the NABS architectures is a hierarchical schema, where the face and ear modules work in parallel, while the fingerprint module is connected in cascade; we further propose a new architectural schema called the N-cross testing protocol (NCTP) and present the results obtained by a multimodal system implemented according to it.
4. A reliability measure: each subsystem in a multimodal architecture should return a reliability measure expressing how much its response can be trusted; in fact, not all subsystems might be equally reliable, and single responses might deserve different degrees of confidence. This is important for fusing results; we propose two alternative measures, based on the composition of the stored gallery.
5. The fusion process: the integration of information from different biometries is possible at three moments, i.e., during feature extraction, matching, or decision. The sooner the fusion is performed, the higher the amount of information that can be preserved.

N-Cross Testing Protocol (NCTP)

The N-cross testing protocol implements fusion through a novel kind of collaboration among subsystems. In this architecture, N subsystems Tk, k = 1, 2, ..., N, work in parallel, first in identification mode and then in lookup mode, and exchange information at fixed points (see Fig. 1). Different data are acquired for the probe subject (e.g., face image, ear image, voice). The N subsystems start up independently and extract biometric features. Each Tk retrieves a list of candidates, where each list item includes the ID of a subject in the database and a score measuring its similarity with the input. The lists are ordered by increasing similarity. Each subsystem sends the others a sublist with only the first M subjects, for a given M fixed in advance. Each Tk then merges the N − 1 received lists into a single one. The length of a merged list varies in the range [M, M(N − 1)], depending on whether the same subjects are present in all the lists or only in some (or none) of them. For a correct fusion, scores from different subsystems are made consistent through the quasi-linear sigmoid (QLS) function.
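
The exact exchange and fusion rules belong to the NABS proposal itself; the following Python fragment is only a minimal sketch of the list exchange and merging step described above, assuming similarity scores already on a common [0, 1] scale and a simple score-sum rule for candidates that occur in more than one received sublist. The subsystem names and toy scores are hypothetical.

```python
# Minimal sketch of the NCTP list exchange and merging step.
# The matchers, the score scale, and the combination rule used after merging
# are illustrative assumptions, not the authors' implementation.

from typing import Dict, List, Tuple

def top_m(candidates: List[Tuple[str, float]], m: int) -> List[Tuple[str, float]]:
    """Return the M most similar candidates of one subsystem (sketch's ordering)."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:m]

def merge_received(received: List[List[Tuple[str, float]]]) -> Dict[str, float]:
    """Merge the N - 1 sublists received from the other subsystems.

    Candidates appearing in several sublists collapse into one entry, so the
    merged list length stays in the range [M, M*(N - 1)] as stated in the text.
    Duplicate scores are simply summed here (an assumed rule); in NABS the
    scores would first be made consistent with QLS.
    """
    merged: Dict[str, float] = {}
    for sublist in received:
        for subject_id, score in sublist:
            merged[subject_id] = merged.get(subject_id, 0.0) + score
    return merged

# Toy example: N = 3 subsystems (e.g., face, ear, fingerprint), M = 2.
lists = {
    "face":        [("s1", 0.91), ("s2", 0.80), ("s3", 0.40)],
    "ear":         [("s2", 0.85), ("s4", 0.70), ("s1", 0.55)],
    "fingerprint": [("s1", 0.99), ("s5", 0.60), ("s2", 0.58)],
}
M = 2
sublists = {name: top_m(cands, M) for name, cands in lists.items()}

# Each subsystem merges the sublists coming from the other N - 1 subsystems.
for name in sublists:
    received = [sl for other, sl in sublists.items() if other != name]
    print(name, "merged list:", merge_received(received))
```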

Sigmoid function

Many natural processes, including complex system learning curves, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a detailed description is lacking, a sigmoid function is often used. A sigmoid curve is produced by a mathematical function having an "S" shape. Often, sigmoid function refers to the special case of the logistic function, defined by the formula S(t) = 1 / (1 + e^(-t)).
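
As a small illustration of the formula above, the sketch below evaluates the logistic sigmoid and uses a sigmoid-shaped mapping to squash raw, unbounded matching scores into [0, 1]. The `center` and `spread` parameters are hypothetical tuning knobs; this is not the QLS function defined in the NABS paper, only an illustration of why a sigmoid shape is convenient when the maximum score to normalize is unknown.

```python
import math

def logistic(t: float) -> float:
    """Standard logistic sigmoid S(t) = 1 / (1 + e^(-t))."""
    return 1.0 / (1.0 + math.exp(-t))

def sigmoid_normalize(score: float, center: float, spread: float) -> float:
    """Map a raw matching score onto [0, 1] with a sigmoid-shaped curve.

    `center` and `spread` are assumed tuning parameters (e.g., a typical score
    and its dispersion); this is only an illustration of sigmoid-based
    normalization, not the QLS definition.
    """
    return logistic((score - center) / spread)

# Raw scores on an unbounded, subsystem-specific scale (hypothetical values).
raw_scores = [12.0, 35.0, 80.0, 250.0]
print([round(sigmoid_normalize(s, center=50.0, spread=30.0), 3) for s in raw_scores])
```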

Support vector machine (SVM)

A support vector machine (SVM) is a concept in statistics and computer science for a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, which makes the SVM a non-probabilistic binary linear classifier.

False acceptance rate (FAR)

In biometrics, a false acceptance is the instance of a security system incorrectly verifying or identifying an unauthorized person. Also referred to as a type II error, a false acceptance is typically considered the most serious of biometric security errors, as it gives unauthorized users access to systems that are expressly trying to keep them out.
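
To make the definition concrete, the following sketch estimates a FAR empirically as the fraction of impostor comparisons whose similarity score clears the acceptance threshold; the scores and the threshold value are hypothetical.

```python
def false_acceptance_rate(impostor_scores, threshold):
    """Fraction of impostor attempts whose score meets the acceptance threshold.

    `impostor_scores` are similarity scores from comparisons where the claimed
    identity is NOT the true one; any such score at or above `threshold` counts
    as a false acceptance (type II error).
    """
    accepted = sum(1 for s in impostor_scores if s >= threshold)
    return accepted / len(impostor_scores)

# Hypothetical impostor scores on a 0-1 similarity scale.
impostors = [0.12, 0.35, 0.41, 0.72, 0.09, 0.66, 0.28, 0.91]
print(false_acceptance_rate(impostors, threshold=0.70))  # 2 of 8 -> 0.25
```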

Principal component analysis (PCA)

Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. The transformation is defined in such a way that the first principal component has the largest possible variance (that is, it accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to (i.e., uncorrelated with) the preceding components.
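
A brief NumPy sketch of the procedure just described, on an assumed synthetic data set: it diagonalizes the covariance matrix of the centered observations, orders the components by decreasing variance, and checks that they are mutually orthogonal. It is not tied to any particular biometric feature set.

```python
import numpy as np

# Synthetic observations: 200 samples of 3 partly correlated variables (assumed data).
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
data = np.hstack([x,
                  0.8 * x + 0.1 * rng.normal(size=(200, 1)),
                  rng.normal(size=(200, 1))])

# Center the data and diagonalize its covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order

# Reorder so the first principal component has the largest variance.
order = np.argsort(eigvals)[::-1]
variances = eigvals[order]                  # decreasing variances
components = eigvecs[:, order]              # orthonormal directions (columns)

scores = centered @ components              # data expressed in PC coordinates
print("variance per component:", np.round(variances, 3))
print("data in PC coordinates:", scores.shape)
print("components are orthogonal:",
      np.allclose(components.T @ components, np.eye(3)))
```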
