
2009 Canadian Conference on Computer and Robot Vision

A Support Vector Machine Based Online Learning Approach for Automated Visual Inspection
Jun Sun¹,² and Qiao Sun¹
¹ Department of Mechanical and Manufacturing Engineering, University of Calgary, Calgary, Alberta T2N 1N4, Canada
² Alberta Research Council, Calgary, Alberta T2L 2A6, Canada

978-0-7695-3651-4/09 $25.00 © 2009 IEEE   DOI 10.1109/CRV.2009.13
Abstract

In the manufacturing industry there is a need for an adaptable automated visual inspection (AVI) system that can be used for different inspection tasks under different operation conditions without requiring excessive retuning or retraining. This paper proposes an adaptable AVI scheme using an efficient and effective online learning approach. The AVI scheme uses a novel inspection model that consists of two sub-models, for localization and for verification. In the AVI scheme, the region localization module uses a template-matching technique to locate the subject to be inspected based on the localization sub-model. The defect detection module uses the representative features obtained from the feature extraction module and executes the verification sub-model built in the model training module. A support vector machine (SVM) based online learning algorithm is proposed for training and updating the verification sub-model. In the case studies, the adaptable AVI scheme demonstrated promising performance with respect to training efficiency and inspection accuracy. The expected outcome of this research will be beneficial to the manufacturing industry.

1. Introduction

Increasingly, various automated visual inspection (AVI) systems have been utilized for quality assurance of product assembly processes in different production lines. Instead of human inspectors, the AVI system can perform inspection tasks to verify that parts are properly installed and reject improper assemblies. Conventionally, most existing AVI systems use matching-template or rule-based inspection models, which are usually pre-defined manually by the system developer through trial and error. It has been recognized that it is not a trivial task to define either an appropriate matching template or effective inspection rules for a particular inspection problem [1]-[3]. Hence, the conventional AVI system lacks adaptability when changes happen in the production line. For instance:

• The production line is reconfigured to manufacture a new product. This situation requires adapting the existing AVI system to deal with new assembly parts;
• The system operation conditions, such as the illumination condition and camera settings, have changed or drifted after a certain period of time. As a result, the image representation of the inspected assembly part may change, which renders the existing system obsolete. This situation requires adjusting or retuning the existing system so that it can work properly under the new operation conditions.

For the past two decades, researchers have attempted to apply machine-learning techniques, such as neural networks and neuro-fuzzy systems, to improve the adaptability of AVI systems [4]. These learning techniques are commonly used in AVI systems to build inspection models or functions from training samples in an offline fashion. In offline learning, all training samples are obtained beforehand and each training sample is assigned a class label (e.g., defective or non-defective) as the desired system output (i.e., the inspection result). The role of a human inspector is to label the training samples according to the quality standard. The established inspection model or function is not changed after the initial training process has been completed.

However, concerns are often raised by end users that the performance of an offline-learning approach relies heavily on the quality of the initial training data. It requires simulating all possible scenarios of future events, and in many situations it may be difficult or even impossible to collect sufficient representative training samples over a limited period of time. To address this issue, there has recently been an emerging research interest in applying the online learning approach to the development of adaptable AVI systems [2][5]. The idea is to incorporate new inspection patterns into the inspection model as they are encountered during system operation. With this learning capability, the system does not require excessive initial training before it can function, and it can be trained to handle different inspection problems.

This paper presents an adaptable AVI scheme with a support vector machine (SVM) based online learning approach. Particularly, the following objectives are emphasized in this paper: (i) utilizing an adaptable inspection model that can be trained online to adapt itself to different inspection problems; and (ii) developing an efficient and effective online learning algorithm which can minimize the cost of sample labeling while building an accurate inspection model.

2. An adaptable AVI scheme

In this paper, we propose a novel adaptable inspection model that can adapt to different inspection problems through online learning. Taking an assembly process as an example, the process involves installing a part at a required site on an assembly base. As illustrated in Figure 1, a gray-scale image is acquired by a camera for the inspection of the assembly part. The verification region (VR) is a subset of the image that contains the subject being inspected. As such, the system developer can specify it by simply drawing a box enclosing the inspected subject. For a defect detection problem, the VR may show different appearances reflecting non-defective or defective part assembly situations.

Figure 1. Adaptable inspection model (an acquired image with the verification region (VR), the assembly part (Clip A) and the assembly base; the localization sub-model M_L; and the verification sub-model M_V with representative VR patterns 1-3).

The adaptable inspection model consists of two sub-models:

• Localization Sub-model (M_L) encloses the VR and contains features independent of the subject being inspected within the VR. As M_L is invariant to all inspection samples, the VR can be located by using M_L as a reference or landmark within an acquired image.
• Verification Sub-model (M_V) is considered a classification model which incorporates a set of representative VR patterns and can then be used to identify the inspected image as defective or non-defective. M_V is built and updated based on training samples using the SVM learning technique.

In this paper, an adaptable AVI scheme is developed with four major modules: region localization, feature extraction, defect detection, and model training, as illustrated in Figure 2. The region localization module applies an image template matching technique with the predefined M_L to locate the VR within an acquired image. From each VR image, the feature extraction module extracts a number l of representative features to generate the feature vector representing the inspection sample x ∈ ℜ^l. With an existing M_V, defect detection can be done by generating the inspection result y' using x as the input. The model training module updates the existing M_V through an efficient and effective online learning algorithm proposed in this paper. An inspection sample with an uncertain result, called an uncertain sample, is requested to be labeled through manual inspection.

Figure 2. Framework of the adaptable AVI scheme (image → region localization → VR → feature extraction → x → defect detection → y'; certain results are output directly, while uncertain samples are sent to manual inspection and the resulting labeled samples are used by model training to update the adaptable AVI model M_L/M_V).

The online training algorithm is summarized as follows:

Online Learning Algorithm for the Adaptable AVI Scheme
Initialization: Build M_V based on an initial training set of size n, D = [(x_1, y_1), (x_2, y_2), …, (x_n, y_n)], where y denotes the numerical class label +1 or -1, representing the defective or non-defective class, respectively. Calculate and set the confidence threshold p_t for the determination of uncertain/certain results based on the estimated prediction accuracy of M_V.
Updating: For each newly arrived sample x_i without a class label, do the following steps:
1) Classify x_i using the existing M_V, i.e., y_i' = M_V(x_i).
2) Calculate the certainty of the classification result, i.e., p(x_i).
3) If the certainty is unsatisfactory, i.e., p(x_i) < p_t, do the following steps:
   a) Request a class label y_i for x_i from the manual inspection, i.e., assign a class label +1 or -1.
   b) Add the labeled sample (x_i, y_i) into D.
   c) Update (i.e., retrain) the existing M_V with D.
   d) Estimate the prediction accuracy of M_V.
   e) Reset the confidence threshold p_t.
   Otherwise, i.e., p(x_i) ≥ p_t, output y_i' as the certain result.
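As a concrete illustration of this loop, the following Python sketch shows one possible structure for the updating procedure. It is only a sketch: the helper functions classify_with_model, certainty, estimate_prediction_accuracy, retrain, and request_manual_label are hypothetical stand-ins for the modules described above and are not part of the original system; the parameter H is the user-defined threshold constant introduced later in Section 6.

```python
# Minimal sketch of the online updating loop; the helper functions are
# hypothetical stand-ins for the modules described in the text.
def online_update_loop(model, D, H, sample_stream):
    p_t = 1.0 + estimate_prediction_accuracy(model, D) * H   # confidence threshold (Section 6)
    for x_i in sample_stream:                                 # newly arrived, unlabeled samples
        y_pred = classify_with_model(model, x_i)              # step 1: classify with existing M_V
        p_x = certainty(model, x_i)                           # step 2: certainty of the result
        if p_x >= p_t:                                        # certain result: output directly
            yield x_i, y_pred
        else:                                                 # uncertain: fall back to manual inspection
            y_true = request_manual_label(x_i)                # step 3a: label +1 / -1 by an inspector
            D.append((x_i, y_true))                           # step 3b: extend the training set D
            model = retrain(model, D)                         # step 3c: update (retrain) M_V
            p_t = 1.0 + estimate_prediction_accuracy(model, D) * H   # steps 3d and 3e
            yield x_i, y_true
```

Note that the model is only retrained when a sample falls below the threshold, which is what keeps the labeling effort low in the scheme described above.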

Three learning strategies are used for facilitating the online learning algorithm. The first strategy is to adopt an efficient and effective method to estimate the prediction accuracy of the SVM based M_V. The second is to use an adaptive margin sampling approach to reduce the cost of sample labeling and model updating. The third is to adopt a grid-search method to choose the optimal SVM model parameters. The details about the three strategies are described in Section 6.

3. Region localization

In the region localization module, M_L is considered a matching template and the module seeks the best matching occurrence of M_L within the image. A template matching technique, namely the edge-based pattern-matching method, is employed in the region localization module. Instead of comparing every pixel of the whole image, the edge-based technique compares edge pixels with the matching template. It offers several advantages over the pixel-to-pixel correlation method. For example, this technique can rotate and scale edge data to find an object regardless of its orientation or size. In addition, it offers reliable pattern identification when part of an object is obstructed, as long as a certain percentage of its edges remains visible. Since only edge information is used, this technique can provide good results with a greater tolerance of lighting variations.

We used the Geometric Model Finder in the Matrox® Imaging Library to implement the edge-based geometric pattern-matching function for the region localization. The matched M_L occurrence is normalized through scaling, rotation, and landmark alignment. The VR is then identified with the matched M_L occurrence image to represent the subject to be inspected.
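The Geometric Model Finder itself is proprietary, so the sketch below only illustrates the general idea of locating the VR by matching a landmark template, using OpenCV's normalized cross-correlation on Canny edge maps as a simple stand-in for the edge-based geometric matcher; rotation and scale search are omitted. The file names, Canny thresholds, and the fixed VR offset/size are assumptions made for the example.

```python
# Illustrative VR localization with template matching (OpenCV stand-in for the
# Matrox Geometric Model Finder used in the paper).  Matching on Canny edge
# maps rather than raw intensities gives some of the lighting tolerance that
# motivates the edge-based method.
import cv2

def locate_vr(image_path, template_path, vr_offset, vr_size):
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)   # pre-defined M_L template
    img_edges = cv2.Canny(image, 50, 150)
    tpl_edges = cv2.Canny(template, 50, 150)
    result = cv2.matchTemplate(img_edges, tpl_edges, cv2.TM_CCORR_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)                 # best matching occurrence of M_L
    # The VR is taken at a fixed offset/size relative to the matched landmark.
    x = top_left[0] + vr_offset[0]
    y = top_left[1] + vr_offset[1]
    vr = image[y:y + vr_size[1], x:x + vr_size[0]]
    return vr, score

# Example call with hypothetical file names:
# vr_image, score = locate_vr("assembly.png", "ml_template.png", (40, 10), (120, 120))
```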

4. Feature extraction

The feature extraction module generates the feature vector that represents the VR image as the input of M_V. As illustrated in Figure 3, the following steps are taken in the feature extraction process:

1) The image is first binarized using Otsu's method, which chooses a global optimal threshold to maximize the separability of the inspected subject and the background in gray levels [6].
2) On the binarized image, a binary pixel mask can be placed around the subject to be inspected to reduce unnecessary background noise.
3) A blob analysis technique is used for a further reduction of noise pixels. The blob analysis identifies the blobs that are formed by sets of connected pixels; identified blobs are removed if they are considered noise according to certain criteria.
4) The perimeter pixels on the boundary of the inspected subject are determined by using the image dilation and erosion operations.
5) The principal component analysis (PCA) technique is used to determine the center and orientation of the inspected subject. In this step, each perimeter pixel is denoted by a two-dimensional vector consisting of its x and y coordinates in the image. For a set of perimeter pixels, the center of these pixels is denoted by the two-dimensional vector [X_0, Y_0]^T. The two eigenvectors e_1 and e_2 and their corresponding eigenvalues λ_1 and λ_2 can be computed using the PCA technique. Since the eigenvectors e_1 and e_2 are orthogonal, only the orientation θ_1 and the accumulative contribution ratio r_1 of the eigenvector e_1 are selected as representative features.

Upon completing the feature extraction process, the four representative features X_0, Y_0, θ_1, and r_1 are used to generate the feature vector representing a given inspection sample:

x = [X_0, Y_0, θ_1, r_1]^T

Figure 3. Feature extraction process (VR image → binarization → masking → blob analysis → perimeter identification → PCA on the perimeter pixels, yielding the center (X_0, Y_0) and the eigenvectors e_1 and e_2).

The feature extraction module is implemented using the relevant functions provided by the Matlab® image processing toolbox.
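The paper implements these steps with the Matlab image processing toolbox; purely as an illustration, the sketch below reproduces the same pipeline in Python with OpenCV and NumPy. The optional mask, the minimum blob area, and the 3x3 morphological kernel are assumed values, not parameters from the original system.

```python
# Illustrative feature extraction following steps 1)-5): Otsu binarization,
# masking, blob filtering, perimeter extraction, and PCA on the perimeter
# pixels to obtain x = [X0, Y0, theta1, r1]^T.
import cv2
import numpy as np

def extract_features(vr_gray, mask=None, min_blob_area=30):
    # 1) Otsu binarization of the VR image
    _, binary = cv2.threshold(vr_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 2) optional binary mask around the inspected subject
    if mask is not None:
        binary = cv2.bitwise_and(binary, mask)
    # 3) blob analysis: drop small connected components treated as noise
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    clean = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_blob_area:
            clean[labels == i] = 255
    # 4) perimeter pixels via dilation minus erosion
    kernel = np.ones((3, 3), np.uint8)
    perimeter = cv2.dilate(clean, kernel) - cv2.erode(clean, kernel)
    ys, xs = np.nonzero(perimeter)
    pts = np.column_stack([xs, ys]).astype(float)
    # 5) PCA on the perimeter pixel coordinates
    center = pts.mean(axis=0)                      # (X0, Y0)
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    e1 = eigvecs[:, -1]                            # principal eigenvector
    theta1 = np.arctan2(e1[1], e1[0])              # orientation of e1
    r1 = eigvals[-1] / eigvals.sum()               # contribution ratio of lambda1
    return np.array([center[0], center[1], theta1, r1])
```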

5. Background of SVM

Support Vector Machine (SVM) is an effective machine learning technique for classification [7][8]. For a defect detection problem, it is required to find a classification function based on a given set of labeled training samples (x_i, y_i), i = 1, 2, …, n, where the sample vector x_i ∈ ℜ^l and the label y_i ∈ {+1, -1}. For a non-linear classification problem, we first map the sample vector x into a higher dimensional space z ∈ ℜ^h and then construct a linear hyperplane in this space:

z = φ(x),  w·z + b = 0,  w ∈ ℜ^h    (1)

where φ(x) is the space transforming function, and w and b are the weight vector and bias of the linear hyperplane. For the hyperplane, there are two margins:

w·z + b = +1  and  w·z + b = −1    (2)

Finding the optimal hyperplane is a constrained optimization problem with the following primal objective function and constraints:

min P(w, b, ξ) = (1/2)||w||² + C Σ_{i=1}^{n} ξ_i
s.t.  y_i (w·z_i + b) ≥ 1 − ξ_i,  ξ_i ≥ 0,  ∀i    (3)

where ξ_i denotes the classification error for each training sample and C is the penalty parameter for the error term. If the error term is not included in (3), the hyperplane is considered a hard-margin classifier that attempts to separate all samples correctly between the two classes. Because there are noises and outliers in the training data, in many situations such a hard-margin classifier may not achieve a good performance on classifying future unseen data. By adding the error term and the penalty parameter, the hyperplane obtained from (3) becomes a soft-margin classifier, which can reduce the influence of noise and outliers in the training data.

The constrained optimization problem (3) can be solved using the solution of its dual problem:

max D(α) = Σ_{i=1}^{n} α_i − (1/2) Σ_{i,j=1}^{n} α_i α_j y_i y_j (z_i·z_j)
s.t.  C ≥ α_i ≥ 0, ∀i,  and  Σ_{i=1}^{n} α_i y_i = 0    (4)

With the identified optimal α_i, the solution of the weight vector w is described as:

w* = Σ_{i=1}^{n} y_i α_i z_i    (5)

All training samples with α_i > 0 at the solution are called support vectors; they represent all relevant patterns for the classification problem. The samples with 0 < α_i < C are called unbounded support vectors, which lie on or between the two margins described by (2), while the samples with α_i = C are called bounded support vectors, which are misclassified training samples. Assume there are m support vectors (s_i, y_i), i = 1, 2, …, m. The optimal weight vector w can then be described using only these support vectors:

w* = Σ_{i=1}^{m} y_i α_i s_i    (6)

The bias b can be calculated in terms of w* and any unbounded support vector:

b* = y − w*·s    (7)

The corresponding classification function is then obtained as follows:

f(z) = sign(w*·z + b*) = sign( Σ_{i=1}^{m} y_i α_i (s_i·z) + b* )    (8)

If a kernel function K is used to substitute the dot products in (4) and (8), the calculation depends only on the kernel function, without directly dealing with the mapping function φ(x). The kernel function K is expressed as:

K(x_i, x_j) = z_i·z_j = φ(x_i)·φ(x_j)    (9)

For example, the radial basis function (RBF) is used as the kernel function in this paper:

K(x_i, x_j) = exp(−γ ||x_j − x_i||²)    (10)

Based on the above descriptions, we conclude that SVM can generate the following non-linear classification function with a set of support vectors:

f(x) = sign( Σ_{i=1}^{m} y_i α_i K(s_i, x) + b* )    (11)

In this paper we use the sequential minimal optimization (SMO) algorithm for training the SVM [9]. SMO is an iterative method that can quickly converge to the optimal solution by iteratively updating the hypothesis.
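For readers who want to experiment with this formulation, the short sketch below trains a soft-margin RBF SVM with scikit-learn (whose solver is SMO-based) and evaluates the decision function of Eq. (11). The feature values, C, and gamma are placeholder values and not results from the paper; in the proposed scheme these parameters are chosen by the grid search of Section 6.3.

```python
# Soft-margin SVM with an RBF kernel (Eqs. (3)-(4), (10)-(11)) using
# scikit-learn; the training data and parameter values are toy placeholders.
import numpy as np
from sklearn.svm import SVC

X_train = np.array([[10.5, 12.0, 0.10, 0.85],     # toy feature vectors [X0, Y0, theta1, r1]
                    [10.7, 11.8, 0.12, 0.84],
                    [15.2, 18.3, 0.90, 0.55],
                    [15.0, 18.1, 0.95, 0.57]])
y_train = np.array([+1, +1, -1, -1])              # +1 / -1 class labels

model = SVC(kernel="rbf", C=10.0, gamma=0.5).fit(X_train, y_train)

# model.support_vectors_ are the s_i, model.dual_coef_ holds y_i * alpha_i,
# and model.intercept_ is b*; predict() evaluates sign(sum y_i a_i K(s_i, x) + b*).
x_new = np.array([[10.6, 11.9, 0.11, 0.86]])
print(model.predict(x_new))            # inspection result y'
print(model.decision_function(x_new))  # signed value later used for the certainty p(x)
```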
6. Model training

In the proposed AVI scheme, an online learning algorithm (as described in Section 2) is used to build and update M_V. In order to make the online learning algorithm effective and efficient, the following three learning strategies are employed in this algorithm.

6.1. Estimating prediction accuracy

In order to evaluate the training sufficiency, it is useful to estimate the prediction accuracy, which reflects the model performance on future unseen inspection samples. In the proposed online learning algorithm, the leave-one-out (LOO) method is used to estimate the prediction accuracy of the trained classification model M_V. In the LOO method, the classification model is tested on a held-out training sample; if the sample is classified incorrectly, it is said to produce a LOO error. This process is repeated for all training samples. The LOO error rate is equal to the number of LOO errors divided by the total number of training samples [10]. The LOO error is an unbiased estimator of the true generalization error compared to other estimators obtained by the k-fold cross-validation and splitting-sample methods. A disadvantage of the LOO method is its computational inefficiency, because it needs to build the model n times for a training set of size n.

An efficient and effective method was developed by Joachims for estimating the LOO error of an SVM [10]. In particular, this method does not require actually performing n rounds of re-sampling and retraining, but can be applied directly after training the model. Based on the solution α of the SVM training problem and the corresponding classification error ξ_i for each training sample, the LOO error rate can be calculated by

E_ξα = d / n,  where  d = |{ i : 2 α_i R_Δ² + ξ_i ≥ 1,  i = 1, …, n }|    (12)

where d counts the number of training samples for which the inequality holds. In the inequality, R_Δ² is an upper bound on the kernel function K(x_i, x_j) for any two training samples; for the RBF kernel, R_Δ² = 1. This method is adopted in the proposed online learning algorithm to estimate the prediction accuracy of M_V.
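A possible computation of this estimator from a trained scikit-learn model is sketched below. It assumes the RBF kernel (so R_Δ² = 1) and reads α_i from the fitted model's dual coefficients and ξ_i from its decision values; this is an illustration of Eq. (12), not the authors' implementation.

```python
# xi-alpha estimate of the LOO error (Eq. (12)) for a fitted sklearn SVC with
# an RBF kernel, where R^2_Delta = 1.  X and y are the training samples/labels.
import numpy as np

def xi_alpha_loo_error(model, X, y, r2_delta=1.0):
    n = len(X)
    alpha = np.zeros(n)
    alpha[model.support_] = np.abs(model.dual_coef_[0])         # alpha_i (non-zero only for SVs)
    xi = np.maximum(0.0, 1.0 - y * model.decision_function(X))  # slack variables xi_i
    d = int(np.sum(2.0 * alpha * r2_delta + xi >= 1.0))         # count of potential LOO errors
    return d / n
```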

6.2. Reducing online updates

In the online learning process, an online update is required only when an uncertain sample is encountered. Reducing the number of updates of M_V also means reducing the costs of manual sample labeling and of the model retraining computation. The goal here is to achieve a good prediction performance of M_V while requesting as few online updates as possible.

To this end, we propose an adaptive margin sampling method, in which a sampling heuristic is used to determine whether a given sample is informative and should be labeled for updating the model. Let us define the certainty p(x) of the classification result using the distance from the given sample to the hyperplane of M_V:

p(x) = | Σ_{i=1}^{m} y_i α_i K(s_i, x) + b* |    (13)

That is, the closer a sample is to the hyperplane, the less certain the classification result. The sampling heuristic is summarized as follows: a given sample is considered to have a certain classification result if p(x) > p_t, and it can then be classified by the existing M_V. Otherwise, the sample is an uncertain sample that is selected and labeled manually for updating the existing M_V.

In the online learning algorithm, the confidence threshold p_t is calculated based on the LOO error rate E_ξα (described in (12)) of the existing M_V that is currently used for classification:

p_t = 1 + E_ξα · H    (14)

where H is a user-defined parameter that denotes the default distance between the threshold and the margin when E_ξα is equal to 1 (i.e., 100%). As illustrated in Figure 4, when E_ξα decreases during the online learning process, the confidence threshold p_t decreases adaptively. With the adaptive margin sampling method, once the estimated LOO error for the trained M_V converges after a certain number of samples have been trained, M_V is considered stabilized and further online updating may become unnecessary.

Figure 4. Confidence threshold (in the transformed space Φ(x), the thresholds ±p_t lie a distance E_ξα·H outside the ±1 margins on either side of the hyperplane).
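Combining Eqs. (12) to (14), the sampling decision reduces to a few lines of code. The sketch below assumes the xi_alpha_loo_error helper from the Section 6.1 sketch, a NumPy feature vector x, and a user-supplied constant H; it is an illustration rather than the original implementation.

```python
# Adaptive margin sampling (Eqs. (13)-(14)): a sample is only sent to manual
# inspection when its certainty falls below the adaptive threshold p_t.
import numpy as np

def needs_manual_label(model, x, loo_error, H=1.0):
    p_x = float(np.abs(model.decision_function(x.reshape(1, -1))[0]))  # certainty p(x), Eq. (13)
    p_t = 1.0 + loo_error * H                                          # threshold p_t, Eq. (14)
    return p_x < p_t

# As the estimated LOO error E_xi_alpha shrinks during online learning, p_t
# approaches 1 (the margin), so fewer and fewer samples trigger an update.
```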
6.3. Selecting SVM parameters C and γ

As described in Section 5, there are two important parameters, C and γ, that should be pre-determined when we apply the SVM technique with the RBF kernel function. In this paper, we use a grid-search method to choose the parameters (C, γ) [11]. Minimizing the LOO error is considered the objective of the search process, and the pair (C, γ) with the minimum LOO error is chosen as the optimal parameters. The benefit of identifying the optimal parameters (C, γ) is that it helps prevent over-fitting the outliers and noises in the training data. It has been found by researchers that the grid-search method is simple but quite effective compared to other advanced heuristic search methods [11].

Practically, the grid search is implemented on exponentially growing sequences of C and γ of the form:

{ 2^begin, 2^(begin+step), …, 2^(begin+k·step), …, 2^end }

For example, the search grids for C and γ are C = 2^-5, 2^-3, …, 2^15 and γ = 2^-15, 2^-13, …, 2^3, respectively. In order to reduce the computational cost of the search process, the search uses a coarse grid first. After identifying an "optimal region" on the grid, a fine grid search on that region can be conducted.
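One way to realize the coarse stage of this search is sketched below, scoring each (C, γ) pair with the ξα LOO estimate; the grid bounds mirror the exponents quoted above, xi_alpha_loo_error is the helper from the Section 6.1 sketch, and a finer search around the best cell would follow. This is an illustrative sketch, not the authors' code.

```python
# Coarse grid search over exponentially spaced (C, gamma) values, scored by
# the xi-alpha LOO estimate of Eq. (12).
import numpy as np
from sklearn.svm import SVC

def grid_search_svm(X, y, c_exps=range(-5, 16, 2), g_exps=range(-15, 4, 2)):
    best = (None, None, float("inf"))              # (C, gamma, loo_error)
    for ce in c_exps:
        for ge in g_exps:
            C, gamma = 2.0 ** ce, 2.0 ** ge
            model = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
            err = xi_alpha_loo_error(model, X, y)  # helper from the Section 6.1 sketch
            if err < best[2]:
                best = (C, gamma, err)
    return best
```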

7. Defect detection

In the proposed AVI scheme, the defect detection module classifies inspection samples as defective or non-defective using the trained M_V. During the online learning process, this module identifies the uncertain samples that need to be inspected manually and invokes the online learning process for updating M_V. In the meanwhile, it also provides confident inspection results for the certain samples.

8. Case studies

In the case studies, we applied the proposed adaptable AVI scheme to field data that were collected from an existing fastener inspection system for a truck cross-car beam assembly. In the assembly line, an AVI system is used to examine a total of 46 metal clips inserted by assembly robots for their proper installation. The camera was mounted at the top of the assembly robot, and the illumination was provided by the overhead lighting from the ceiling of the plant. The existing system works well after an excessive amount of manual tuning; improving the system adaptability to changes has been a top priority.

8.1. Performance measures

The following performance measures are used to evaluate the training efficiency and inspection accuracy of the proposed AVI scheme:

• Manual Inspection Rate (R_MI) and Automatic Inspection Rate (R_AI) – The efficiency of the online training can be affected by the R_MI. Given the total number of processed inspection samples N, the number of manual inspection samples N_MI, and the number of automatic inspection samples N_AI, the R_MI and R_AI are defined as

R_MI = N_MI / N    (15)

R_AI = N_AI / N = 1 − R_MI    (16)

The higher the R_MI (i.e., the lower the R_AI), the more cost for human involvement is needed in the online training process.

• False Positive Rate (R_FP) and False Negative Rate (R_FN) – The R_FP and R_FN are used to measure the accuracy of the automatic inspection. A false positive result refers to a non-defective sample that is incorrectly classified as defective. Given the total number of non-defective samples N_N and the number of false positive samples N_FP, the R_FP is defined as

R_FP = N_FP / N_N    (17)

A false negative result refers to a defective sample that is incorrectly classified as non-defective. Given the number of positive (defective) samples N_P and the number of false negative samples N_FN, the R_FN is defined as

R_FN = N_FN / N_P    (18)

Although both the R_FP and R_FN are important measures of inspection accuracy, the R_FN is more critical to manufacturers as they desire not to release any defective product to customers.
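These four rates reduce to simple ratios of counts; a small helper such as the one below (with illustrative argument names) computes them directly from Eqs. (15) to (18).

```python
# Performance measures of Eqs. (15)-(18) computed from raw counts; the
# argument names are illustrative.
def performance_measures(n_total, n_manual, n_nondefective, n_fp, n_defective, n_fn):
    r_mi = n_manual / n_total                   # manual inspection rate, Eq. (15)
    r_ai = 1.0 - r_mi                           # automatic inspection rate, Eq. (16)
    r_fp = n_fp / n_nondefective                # false positive rate, Eq. (17)
    r_fn = n_fn / n_defective                   # false negative rate, Eq. (18)
    return r_mi, r_ai, r_fp, r_fn
```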
8.2. Experimental results and analysis

This section presents the experimental results of inspecting one type of metal clip, Clip A, as shown in Figure 1. A sequence of inspection samples was collected for the experiment, including 101 non-defective (i.e., installed properly) and 105 defective (i.e., missing or installed improperly) samples. In the sample sequence, the first 140 samples were used for training, while the rest of the samples were held out as a test dataset.

In the experiment for the online learning approach, we used the first 40 samples as the initial training data. It was observed that the prediction accuracy of M_V became stabilized after the following 100 samples (indexing from 41 to 140) were processed. The experimental results titled Online-Learning are summarized in Table 1.

To demonstrate the promising performance of the online learning approach, three experimental results were also generated with the offline learning approach for comparison. It was assumed that all samples used for the offline approach were manually labeled (as defective or non-defective) beforehand. Using the same sample sequence as for the online approach, the SVM based M_V was trained in batch mode with the samples indexing from 1 to 40, 1 to 90, and 1 to 140, respectively. Each trained M_V was evaluated using the same test data as for the online approach. Table 1 also presents these three results, titled Offline-Learning I, II, and III, respectively.

Table 1. Experimental results of the online and offline learning approaches (R_MI, R_AI, R_FN, and R_FP refer to the training/online performance; the test columns give the inspection accuracy on the held-out test samples)

Approach             | Sample indexes                        | R_MI | R_AI | R_FN | R_FP | Test R_FN | Test R_FP
Online-Learning      | 1~140 (1~40 as initial training data) | 42%  | 58%  | 0%   | 0%   | 0%        | 5.41%
Offline-Learning III | 1~140                                 | n/a  | n/a  | n/a  | n/a  | 0%        | 5.41%
Offline-Learning II  | 1~90                                  | n/a  | n/a  | n/a  | n/a  | 6.41%     | 5.90%
Offline-Learning I   | 1~40                                  | n/a  | n/a  | n/a  | n/a  | 8.11%     | 24.41%

From Table 1, it can be found that the inspection accuracy of the offline approach relies on the size of the training dataset collected beforehand. Compared to the offline approach, the online approach provided a higher training efficiency: using the online approach, only 42% of the 100 processed samples requested manual labeling. The online approach achieved the same inspection accuracy (R_FP = 5.41% and R_FN = 0%) as Offline-Learning III after sequentially processing the same samples indexing 41~140. In the meantime, the online approach provided a highly accurate inspection (R_FP = 0% and R_FN = 0%) on the 58% of the samples that were processed automatically during online learning.

In order to demonstrate the adaptability of the proposed AVI scheme, we also conducted several experiments with datasets acquired in different scenarios, inspecting different clips and changing the operation conditions. The experimental results still showed that the proposed AVI scheme had promising performances with respect to its training efficiency and inspection accuracy.

9. Conclusions

In this paper, we have presented an adaptable AVI scheme for the application of part-assembly inspection. The proposed AVI scheme can adapt to changing inspection tasks and operation conditions through an online learning process without requiring excessive retuning or retraining. An adaptable inspection model, consisting of the two sub-models M_L and M_V, plays a key role in the proposed AVI scheme. There are four major modules in the AVI scheme: region localization, feature extraction, model training, and defect detection. The region localization is implemented by using the edge-based geometric template-matching technique to locate the subject to be inspected based on M_L. The defect detection is realized by using the representative features obtained by the feature extraction and executing the M_V built by the model training. An efficient and effective online learning algorithm is developed using the SVM technique.

In the case studies, the proposed AVI scheme demonstrated promising adaptability with respect to its training efficiency and inspection accuracy. The expected outcome of this research will be beneficial to the manufacturing industry. Future work will focus on refining and validating the adaptable AVI scheme with a greater range of data reflecting variations in both inspection parts and operation conditions.

Acknowledgements

The authors would like to thank Van-Rob Stampings Inc. and Dr. Brian W. Surgenor's research group at Queen's University for providing sample images for the case studies.

References

[1] T.S. Newman and A.K. Jain, "A Survey of Automated Visual Inspection," Computer Vision and Image Understanding, vol. 61, no. 2, pp. 231-262, March 1995.
[2] G. Abramovich, S. Dutta, and J. Weng, "Adaptive Part Inspection through Developmental Vision," Journal of Manufacturing Science and Engineering, vol. 127, November 2005.
[3] H.C. Garcia, J. Rene-Villalobos, and G. Runger, "An Automated Feature Selection Method for Visual Inspection Systems," IEEE Transactions on Automation Science and Engineering, vol. 3, no. 4, pp. 394-406, October 2006.
[4] E.N. Malamas, E.G.M. Petrakis, M. Zervakis, L. Petit, and J.-D. Legat, "A Survey on Industrial Vision Systems, Applications and Tools," Image and Vision Computing, vol. 21, pp. 171-188, February 2003.
[5] H. Jia, Y.L. Murphey, J. Shi, and T.-S. Chang, "An Intelligent Real-time Vision System for Surface Defect Detection," Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), Cambridge, UK, August 23-26, 2004, pp. 239-242.
[6] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-9, no. 1, pp. 62-66, January 1979.
[7] V. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
[8] M. Hearst, S. Dumais, E. Osuna, J. Platt, and B. Scholkopf, "Support Vector Machines," IEEE Intelligent Systems and Their Applications, vol. 13, no. 4, pp. 18-28, July/August 1998.
[9] J. Platt, "Fast Training of Support Vector Machines Using Sequential Minimal Optimization," in Advances in Kernel Methods: Support Vector Learning (edited by B. Scholkopf, C.J.C. Burges, and A. Smola), The MIT Press, Cambridge, 1999.
[10] T. Joachims, "Estimating the Generalization Performance of an SVM Efficiently," LS-8 Report 25, Universität Dortmund, Fachbereich Informatik, 1998.
[11] C.-W. Hsu, C.-C. Chang, and C.-J. Lin, "A Practical Guide to Support Vector Classification" (online). Available: http://www.csie.ntu.edu.tw/~cjlin/papers/