2012 Intelligent Vehicles Symposium

Alcalá de Henares, Spain, June 3-7, 2012

Fast Visual Road Recognition and Horizon Detection Using Multiple
Artificial Neural Networks
Patrick Y. Shinzato, Valdir Grassi Jr, Fernando S. Osorio and Denis F. Wolf

P. Y. Shinzato, F. S. Osorio and D. F. Wolf are with the Mobile Robotics Laboratory, Institute of Mathematics and Computer Science, University of Sao Paulo (ICMC-USP), Sao Carlos, SP, Brazil. shinzato@icmc.usp.br, fosorio@icmc.usp.br, denis@icmc.usp.br
V. Grassi Jr. is with the Dept. of Electrical Engineering, Sao Carlos School of Engineering, University of Sao Paulo (EESC-USP), Sao Carlos, SP, Brazil. vgrassi@usp.br
This work was not supported by any organization.

Abstract— The development of autonomous vehicles is a highly relevant research topic in mobile robotics. Road recognition using visual information is an important capability for autonomous navigation in urban environments. Over the last three decades, a large number of visual road recognition approaches have appeared in the literature. This paper proposes a novel visual road detection system based on multiple artificial neural networks that can identify the road based on color and texture. Several features are used as inputs of the artificial neural networks, such as the average, entropy, energy and variance of different color channels (RGB, HSV, YUV). As a result, our system is able to estimate the classification and the confidence factor of each part of the environment detected by the camera. Experimental tests have been performed in several situations in order to validate the proposed approach.

I. INTRODUCTION

Visual road recognition, also known as "lane detection", "road detection" and "road following", is one of the desirable skills to improve autonomous vehicle systems. As a result, visual road recognition systems have been developed by many research groups since the early 1980s, such as [1] [2] [3]. Details about these and other works can be found in several surveys [4] [5] [6] [7].

Most work developed before the last decade was based on certain assumptions about specific features of the road, such as lane markings [8] [9], geometric models [10] and road boundaries [11]. These systems have limitations and in most cases they showed satisfactory results only in autonomous driving on paved, structured and well-maintained roads. Furthermore, they required favorable weather and traffic conditions. Autonomous driving on unpaved or unstructured roads and in adverse conditions has also been well studied in the last decade [12] [13] [14] [15]. We can highlight the systems developed for the DARPA Grand Challenge [16], which focused on desert roads.

One of the most representative works in this area is the NAVLAB project [3]. Systems known as ALVINN, MANIAC and RALPH were also developed by the same research group. Among these systems, the most relevant references for this paper are ALVINN and MANIAC, because they are also based on artificial neural networks (ANN) for road recognition.

The idea of ALVINN [17] consists of monitoring a human driver in order to learn the steering of the wheels while driving on roads under varying conditions. This system, after several upgrades, was able to travel on single-lane paved and unpaved roads and on multi-lane lined and unlined roads at speeds of up to 55 mph. However, it is important to emphasize that this system was designed and tested to drive on well-maintained roads, like highways, under favorable traffic conditions. Beyond those limitations, the learning step takes a few minutes [17], and the authors mention that the need for retraining is a shortcoming [18]. According to [19], the major problem of ALVINN is its lack of ability to learn features that would allow the system to drive on road types other than that on which it was trained.

In order to improve the autonomous control, MANIAC (Multiple ALVINN Networks In Autonomous Control) [19] was developed. In this system, several ALVINN networks must be trained separately on the respective road types that are expected to be encountered during driving. Then the MANIAC system must be trained using stored exemplars from the ALVINN training runs. If a new ALVINN network is added to the system, MANIAC must be retrained. When properly trained, both ALVINN and MANIAC can handle non-homogeneous roads in various lighting conditions. However, this approach only works on straight or slightly curved roads [12].

Another group that developed visual road recognition based on ANN was the Intelligent Systems Division of the National Institute of Standards and Technology [20]. They developed a system that makes use of a dynamically trained ANN to distinguish between areas of road and non-road. This approach is capable of dealing with non-homogeneous road appearance if the non-homogeneity is accurately represented in the training data. In order to generate training data, three regions of the image were labeled as road and three other regions as non-road, i.e., the authors made assumptions about the location of the road in the image, which causes problems in certain traffic situations. Additionally, this system works with the RGB color space, which is strongly affected by shadows and lighting changes in the environment. A later work [21] proposed the dynamic location of the regions labeled as road in order to avoid these problems. However, under shadow situations the new system becomes less accurate than the previous one, because the dynamic location does not incorporate the road-with-shadow information into the training database.

In this work, we present a visual road detection system that uses multiple ANN, similar to MANIAC, in order to improve the robustness.


Our ANN learn colors and textures from sub-images instead of the whole road appearance, as ALVINN does, and we combine a set of ANN outputs to generate only one classification for each sub-image. Several features, such as the average, entropy, energy and variance of different color channels (RGB, HSV, YUV), are computed from the sub-images and capture color as well as texture information. Unlike [20], our system does not require assumptions about the location of the road in the image. Also, our system does not need to be retrained all the time, because its generalization capacity is better than theirs. In the final step of the algorithm, we estimate the height of the horizon line in the image in order to improve the classification.

II. SYSTEM DESCRIPTION

The system's goal is to identify the road region in an image obtained by a video camera attached to a vehicle. Fig. 1 shows how the system works. To accomplish this task, our system is divided into four stages. The first stage, called Features Generation, transforms the image into a set of sub-images and generates image features for each one. These features are used in the second and third stages, called Road Identification and Horizon Identification, to detect the road and the horizon line respectively. In the last stage, called Combination, our system combines the results of the two identifiers to provide a matrix that is the final classification, called "Visual Navigation Map" (VNMap). A control algorithm then uses the VNMap in order to control the vehicle autonomously. After executing an action, the system captures another image from the environment and returns to the first stage.

Fig. 1. The System Architecture: given an image, it is transformed into a set of sub-images that will be separately classified by two types of identification. Combining all outputs from the road and horizon identifiers, our system provides the VNMap that is used by a control algorithm.

A. Features Generation

This stage transforms an image into a set of sub-images and generates image features for each of them. In our system, an image feature is a statistical measure computed for a group of pixels considering a specific color channel. More specifically, an image I with a size of (M × N) pixels is decomposed into many sub-images with (K × K) pixels. This subdivision is described as follows: the element I(m, n) corresponds to the pixel in row m and column n of the image, where (0 ≤ m < M) and (0 ≤ n < N); the sub-image (i, j) is represented by the group G(i, j) that contains all pixels I(m, n) such that (iK ≤ m < (iK + K)) and (jK ≤ n < (jK + K)). All pixels from a group are considered as belonging to the same class, i.e., all pixels of a sub-image receive the same classification. This strategy has been used to reduce the amount of data, allowing faster processing, and the classification of the whole image is obtained from the classification of all its sub-images. Fig. 2(a) shows an example image, which is transformed into the set of sub-images shown in Fig. 2(b); after classification we can obtain results like Fig. 2(c), where the sub-images belonging to the road class are painted in red.

For each group, many image features are generated. We used the following statistical measures: mean, entropy, variance and energy. These measures can be associated with a channel from four different color spaces in order to define a feature: RGB, HSV, YUV and normalized RGB. The normalized RGB is composed of (R/(R+G+B)), (G/(R+G+B)) and (B/(R+G+B)). The choice of these attributes was based on an ANN-based feature selection method and on a previous work [22]. Table I shows all the combinations of statistical measures and color channels calculated by our system.
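To make the feature-generation stage concrete, the sketch below computes the four statistical measures for every K × K block of one channel. It is an illustrative NumPy implementation written for this description, not the authors' code (which relied on OpenCV and the C-based FANN library); in particular, the histogram-based definitions of entropy and energy and the 16-bin choice are assumptions, since the paper does not spell them out.

```python
import numpy as np

def block_features(channel, K=10, bins=16):
    """Mean, entropy, variance and energy for each K x K sub-image of one channel."""
    H, W = channel.shape
    rows, cols = H // K, W // K
    feats = np.zeros((rows, cols, 4))
    for i in range(rows):
        for j in range(cols):
            block = channel[i*K:(i+1)*K, j*K:(j+1)*K].astype(np.float64)
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 255.0))
            p = hist / hist.sum()
            p = p[p > 0]
            feats[i, j, 0] = block.mean()               # mean
            feats[i, j, 1] = -(p * np.log2(p)).sum()    # entropy of the block histogram
            feats[i, j, 2] = block.var()                # variance
            feats[i, j, 3] = (p ** 2).sum()             # energy (assumed: squared histogram)
    return feats

def normalized_rgb(img):
    """RN, GN, BN channels, rescaled back to the 0..255 range."""
    s = img.sum(axis=2, keepdims=True) + 1e-6
    return 255.0 * img / s

img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)   # stands in for a camera frame
rn = normalized_rgb(img)[:, :, 0]
features_rn = block_features(rn, K=10)    # 24 x 32 sub-images, 4 measures each
```

Running block_features on each of the channels listed in Table I yields the feature vectors that feed the identifiers described next.
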
TABLE I
FEATURES CALCULATED BY OUR SYSTEM. NOTE THAT RN, GN AND BN ARE THE NORMALIZED RGB CHANNELS.

Measure    R  G  B  H  S  V  Y  U  RN  GN  BN
Mean       ×  ×  ×  ×  ×  ×  ×  ×  ×
Entropy    ×  ×  ×  ×  ×
Variance   ×  ×
Energy     ×  ×

Fig. 2. The image (a) is transformed into a set of sub-images, each one represented by a square in image (b). After classification we can obtain results like (c), where red squares were classified as belonging to the road class.

B. Road and Horizon Identification

An identifier is responsible for classifying a sub-image as belonging or not to a specific class. It is composed of several ANN (Fig. 3), and these ANN are distinguished by the combination of features used as input, i.e., each ANN receives a different set of the image features generated by the previous stage (some of the features, not all). Section III explains the ANN used here in more detail. To generate the classification of one sub-image, we combine the outputs of all the ANN of the identifier into only one value. Another detail about this classification is that it provides a confidence factor for each sub-image, which can be used by any control system.

Our system uses two identifiers, one to detect the road and the other to detect the horizon line. The difference between these identifiers lies in the features and in the training databases used.
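The sketch below illustrates how an identifier of this kind can be organized: each network sees only its own subset of the features of a sub-image, and the sub-image classification is the average of all network outputs (the combination rule stated in the next paragraphs). The DummyNet placeholder and the feature index sets are invented for the example; they stand in for the trained MLPs and for the feature sets listed in Section IV.

```python
import numpy as np

class SubImageIdentifier:
    """Combines the outputs of several ANN into one confidence value per sub-image."""

    def __init__(self, networks, feature_indices):
        # networks[k] is any object with predict(x) -> value in [0, 1];
        # feature_indices[k] selects which generated features that network receives.
        self.networks = networks
        self.feature_indices = feature_indices

    def classify(self, feature_grid):
        rows, cols, _ = feature_grid.shape
        confidence = np.zeros((rows, cols))
        for net, idx in zip(self.networks, self.feature_indices):
            for i in range(rows):
                for j in range(cols):
                    confidence[i, j] += net.predict(feature_grid[i, j, idx])
        return confidence / len(self.networks)   # average of all ANN outputs

class DummyNet:
    def predict(self, x):
        return float(np.clip(x.mean(), 0.0, 1.0))   # placeholder for a trained MLP

grid = np.random.rand(24, 32, 11)                            # 24 x 32 sub-images, 11 features each
identifier = SubImageIdentifier([DummyNet(), DummyNet()], [[0, 3, 5], [1, 2, 7]])
road_map = identifier.classify(grid)                         # confidence in [0, 1] per sub-image
```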

Fig. 3 shows the structure of an identifier with N ANN: the features of a sub-image are submitted to every ANN, each ANN returns a value ranging from 0 to 1, and the classification of the sub-image is the average of the results of all ANN outputs. After all sub-images of an image have been classified, this stage generates a matrix Map of real numbers, where Map(i, j) is the classification of sub-image G(i, j).

Fig. 3. Identifier with N ANN: the sub-image features are submitted to ANN 1 through ANN N, and the combination of their outputs is the sub-image classification.

The Road Identifier detects the sub-images that represent road regions in the image. The Horizon Identifier distinguishes sub-images above the horizon line from sub-images below it; this stage is similar to Road Identification, but it uses different features, a different number of ANN and a different training database. Figs. 4 and 5 show, for the same image, the difference between the classifications produced by the two identifiers. The classification used to generate the training databases of both identifiers was manually defined, as shown in Fig. 6; this helps the system to distinguish between the training data of the two classes.

Fig. 4. Results from a classification sample of the road identifier. Black represents the non-road class, white represents the road class and gray represents the intermediate values.

Fig. 5. Results from a classification sample of the horizon identifier. Black represents the non-sky class, white represents the sky class and gray represents the intermediate values.

Fig. 6. Differences between the training databases of the identifiers: image (a) is the original image, image (b) is the classification for the road training database and image (c) is the classification for the horizon training database. Pixels painted red represent patterns for which the identifier must return the value "1" and pixels painted blue represent patterns for which it must return "0".

After generating its Map, the Horizon Identifier estimates the height of the horizon line as follows:

    height = (X/2 + Y) / W,    (1)

where X is the number of sub-images classified with a score higher than threshold1 and lower than threshold2, that is, sub-images above the horizon with low confidence; Y is the number of sub-images classified with a score higher than threshold2, that is, sub-images above the horizon with high confidence; W is the number of sub-images (columns) in one line of Map; and (0 < threshold1 < threshold2 < 1). These threshold values were defined empirically.
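A small sketch of the horizon-height rule of Eq. (1): low-confidence sky sub-images count half, high-confidence ones count fully, and the total is divided by the number of sub-images per row. The threshold values below are illustrative only, since the paper states they were set empirically.

```python
import numpy as np

def horizon_height(sky_map, threshold1=0.5, threshold2=0.8):
    """Estimate the horizon row (in sub-image units) from the horizon identifier's Map."""
    assert 0 < threshold1 < threshold2 < 1
    X = np.count_nonzero((sky_map > threshold1) & (sky_map <= threshold2))  # low confidence
    Y = np.count_nonzero(sky_map > threshold2)                              # high confidence
    W = sky_map.shape[1]                                                    # sub-images per row
    return (X / 2 + Y) / W

sky_map = np.vstack([np.full((8, 32), 0.9),    # confident sky
                     np.full((16, 32), 0.1)])  # confident ground
print(horizon_height(sky_map))                 # -> 8.0 rows of sub-images above the horizon
```
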
In order to improve system performance, we recommend executing the horizon identifier first, so that only the sub-images below the horizon line need to be submitted to the road identifier. After determining the position of the horizon, the last stage of our system updates the classification of all sub-images above the horizon line to zero in the VNMap generated by Road Identification. This procedure improves the final classification, as shown in Fig. 7, and the resulting VNMap can then be used by a control algorithm.

Fig. 7. Results from the combination of the classifications of the horizon identifier and the road identifier, where the right image shows the VNMap in gray-scale: black represents the non-road class, white represents the road class and gray represents the intermediate values. Note that the uncertainty above the horizon was eliminated, causing a classification improvement.

III. ARTIFICIAL NEURAL NETWORK

The ANN used in our system are multilayer perceptrons (MLP) [23], a feedforward neural network model that maps sets of input data into specific outputs. For training we use the resilient propagation technique [24], which updates the weights based on the amount of error in the output compared to the expected results.

The ANN topology consists, basically, of two layers, where the hidden layer has five neurons and the output layer has only one neuron, as shown in Fig. 8. The input layer depends on the features chosen for each ANN instance. We chose this topology based on a previous evaluation [22]. Since the number of neurons is small, the training time is also reduced, enabling the instantiation of multiple ANN. The output is a real value ranging from 0.0 to 1.0: the closer the value is to 1, the greater the confidence factor that the sub-image belongs to the class; the closer it is to 0, the greater the confidence factor that it does not belong.

Fig. 8. ANN topology: the ANN uses some of the features (Feature 1, Feature 2, ..., Feature m), not all, to classify the sub-image as belonging to a class or not; the output is a real value ranging from 0.0 to 1.0.
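For illustration, a self-contained forward pass with the topology just described: m inputs, one hidden layer of five neurons and a single sigmoid output in [0, 1]. This sketch is written for this text only; the authors used the FANN library and trained with resilient propagation, neither of which is reproduced here, and the random initialization is a placeholder for trained weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """m inputs -> 5 hidden neurons -> 1 output in [0, 1]."""

    def __init__(self, n_inputs, n_hidden=5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(1, n_hidden))
        self.b2 = np.zeros(1)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        h = sigmoid(self.W1 @ x + self.b1)                 # hidden layer
        return sigmoid(self.W2 @ h + self.b2).item()       # confidence for the class

net = TinyMLP(n_inputs=4)
print(net.predict([0.2, 0.5, 0.1, 0.7]))
```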

Each ANN is always trained to return the value 1.0 when it receives patterns belonging to its class and to return 0.0 when it receives patterns that do not belong to the class.

Regarding the evaluation of ANN convergence, two metrics are frequently used: the "Mean-Square Error" (MSE) and the "Error Rate". Usually the training step stops when the MSE converges to zero or to some acceptable value. However, a small mean-square error does not necessarily imply good generalization [24], and this metric does not indicate how many patterns were misclassified. The other way of evaluating the convergence is checking how many patterns were misclassified, the "Error Rate". The problem of using this criterion is how to define an adequate threshold value to interpret the ANN output, i.e., given an ANN output varying from 0 to 1, to determine whether this output belongs or not to the class being assessed.

Therefore, we propose a method that assigns a weight to each classification error and computes a score for the ANN. The output error, a real value ranging from 0 to 1, is divided into hmax classes of errors; for instance, if hmax = 10 there are 10 classes, one class for errors between 0 and 0.1, another class for errors between 0.1 and 0.2, and so on. The value hmax therefore determines the precision of the interpretation of the ANN output. A greater weight is assigned to an ANN output with an error of 0.1 than to an output with an error of 0.2. Equation 2 shows how to calculate the score:

    score = ( sum_{i=0..hmax} h(i) * p(i) ) / ( N * p(0) ),    (2)

where N is the number of evaluated patterns, h(i) is the number of classifications whose error falls into the i-th error class, hmax is the size of h() and p() is the weight function. The score ranges from 0 to 1, and score = 1 means that all ANN outputs are correct. The score also indicates whether the error is concentrated in a few patterns or evenly spread over all patterns: a high score indicates few large errors or several small errors. In our work, the weight function p() used was a linear function and hmax was set to 10. In the training step, we use this score to evaluate convergence: the training runs until the score converges to 1 or to some acceptable value.
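A sketch of the score of Eq. (2) for a batch of ANN outputs and their 0/1 targets. The decreasing linear weight p(i) = hmax − i is an assumption; the paper only states that a linear weight function was used and that smaller errors receive larger weights.

```python
import numpy as np

def score(outputs, targets, hmax=10):
    """Weighted hit rate of Eq. (2): 1.0 when every output falls in the smallest-error class."""
    errors = np.abs(np.asarray(outputs) - np.asarray(targets))      # errors in [0, 1]
    classes = np.minimum((errors * hmax).astype(int), hmax - 1)     # error class index 0 .. hmax-1
    h = np.bincount(classes, minlength=hmax)                        # h(i): patterns per error class
    p = hmax - np.arange(hmax)                                      # assumed linear weights, p(0) largest
    N = len(errors)
    return float((h * p).sum() / (N * p[0]))

print(score([0.95, 0.05, 0.6], [1, 0, 1]))   # one output with a large error lowers the score
```
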
IV. EXPERIMENTS AND RESULTS

In order to execute the experiments with the ANN, we used the Fast Artificial Neural Network (FANN) library, a C library for neural networks. The OpenCV library has been used for image acquisition and to visualize the processed results of our system. Our setup for the experiments was a car equipped with a Canon A610 digital camera; the car and the camera were used only for data collection. Several paths traversed by the vehicle were recorded using the video camera. These paths are composed of roads, sidewalks, parking areas, buildings and vegetation, and some stretches present adverse conditions such as dirt, traces of other vehicles and shadows (Fig. 9).

In order to validate the proposed system, several experiments were performed. Data were collected from eight scenarios and, for each one, a training database and an evaluating database were created. We also combined elements from these eight training databases into a single database, and elements from the eight evaluating databases into another single database, resulting in nine databases altogether. The image resolution was (320 × 240) pixels and the constant sub-image size used was K = 10, so each image has 768 sub-images.

Our system uses ten ANN during execution: six for road identification and the remaining four for horizon identification. The road identification uses three sets of features as input, and two ANN are instantiated for each set, resulting in six ANN; the horizon identification uses four sets of features as input. Each set is a small combination of the measures of Table I; among the selected features are the means of the R, G, B, H, U and V channels and of the normalized B, the entropies of H, S, V and of the normalized G, the variances of B and S, and the energies of S and of the normalized G. In the training step, each ANN is trained five times, for up to 5000 cycles, and our system selects the instance with the best score.
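An illustrative version of the train-five-times-and-keep-the-best procedure, using the score defined above as the selection criterion. scikit-learn's MLPRegressor stands in for the FANN networks (it does not implement resilient propagation) and the random data is a placeholder for a real feature database; both are assumptions made only for this sketch.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def score(outputs, targets, hmax=10):
    errors = np.abs(np.asarray(outputs) - np.asarray(targets))
    classes = np.minimum((errors * hmax).astype(int), hmax - 1)
    h = np.bincount(classes, minlength=hmax)
    p = hmax - np.arange(hmax)
    return float((h * p).sum() / (len(errors) * p[0]))

# Placeholder data: one feature set per sub-image, 0/1 class labels.
rng = np.random.default_rng(42)
X_train, y_train = rng.random((500, 4)), rng.integers(0, 2, 500)
X_eval, y_eval = rng.random((200, 4)), rng.integers(0, 2, 200)

best_net, best_score = None, -1.0
for attempt in range(5):                         # each ANN is trained five times
    net = MLPRegressor(hidden_layer_sizes=(5,), activation='logistic',
                       max_iter=5000, random_state=attempt)
    net.fit(X_train, y_train)
    s = score(net.predict(X_eval), y_eval)
    if s > best_score:                           # keep the instance with the best score
        best_net, best_score = net, s
print(best_score)
```
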

A. Evaluation using score

Fig. 10 shows the performance of our system in a boxplot format, based on the analysis of quartiles: each column shows the minimum, the 25% quartile, the median (in blue), the 75% quartile and the maximum, and each column represents the results for one evaluating database. To generate each column, the system was trained with database 9 and then independently evaluated on each image of the corresponding scenario. In general, the results were satisfactory: except for the third scenario, where the light conditions change, most of the images were classified with an error lower than 0.05, and all the medians were lower than 0.1. The most important result, which shows the robustness of our system, is the ninth column, because it represents the system performance on a database consisting of patterns from various scenarios under different conditions. This column shows that our system classifies different scenarios with good precision.

Fig. 9. Samples of the scenarios used in this work. Note the occurrence of shadows and how the colors of the road change under different lighting conditions.

Fig. 10. Results of our system for each evaluating database in boxplot format: each column shows the minimum, the 25% quartile, the median (in blue), the 75% quartile and the maximum.

Fig. 11 shows some results of the system trained with database 9. Our system was capable of distinguishing road from non-road pixels, and all pixels above the horizon line were classified as non-road; the green line shows where the system estimates the horizon line.

Fig. 11. Samples of results from our system when it is trained with database 9. Black represents the non-road class, white represents the road class and gray represents the intermediate values; the green line shows where the system estimates the horizon line.

B. Comparison

We evaluated our system with the three error metrics described in this paper: "MSE", "Error Rate" and "1 − score". To simplify the handling of the score and to compare it with the other metrics, we transform it into an error measure by calculating (1 − score). We compared our results with the ANN described in [21], which uses only one ANN with 26 input neurons, composed of a binary RGB representation (24 features) and the normalized position of the sub-image (2 features). Table II shows the comparison between our system (LRM) and theirs (NIST) for each database.

TABLE II
TABLE OF ERRORS: COMPARISON BETWEEN OUR SYSTEM (LRM) AND THE SYSTEM FROM NIST [21], WHICH ALSO USES ANN.

            MSE            Error Rate       1 − score
Scenery   LRM    NIST     LRM    NIST     LRM    NIST

The ANN from [21] was better than our system in all evaluations in which the system was trained and evaluated with patterns from the same scenery (databases 1 to 8); in other words, their system is better than ours for short paths. However, when both systems were trained with database 9, which contains patterns from many different scenarios under different weather and traffic conditions, our system was much better, since their ANN failed to learn: we can see this clearly in the "Error Rate" column, where the ANN of [21] misclassified 99.7% of the patterns. The most important result, which shows the robustness of our system, is the ninth row, because it represents the system performance in different environments; it shows that our system has higher performance under different traffic and weather conditions. For longer paths, or paths where the light conditions change, our system is better than theirs, because their system must be retrained along the path.
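For completeness, a sketch of the two standard metrics used in this comparison. The 0.5 decision threshold in the error rate is an assumption (the text notes that choosing this threshold is precisely the weakness of the metric), and the third measure, 1 − score, reuses Eq. (2) as sketched earlier.

```python
import numpy as np

def mse(outputs, targets):
    d = np.asarray(outputs) - np.asarray(targets)
    return float(np.mean(d ** 2))

def error_rate(outputs, targets, threshold=0.5):
    predicted = (np.asarray(outputs) >= threshold).astype(int)   # assumed decision threshold
    return float(np.mean(predicted != np.asarray(targets)))

outputs, targets = [0.9, 0.2, 0.55, 0.4], [1, 0, 0, 1]
print(mse(outputs, targets), error_rate(outputs, targets))
```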

045 0. and A.046 0. K. no. 2010 IEEE set of features presented in this paper can be used in different International Conference on.” Journal of Intelligent & Robotic Systems.Critical Embedded Systems . “Vision-based road it performed better for longer paths with sudden lighthing detection via on-line video registration.” in Intelligent Transportation Systems (ITSC). For our vision system. pp.” in Robotics and Automation (ICRA). no. and K.” in Proceedings of IEEE International improve autonomous vehicles systems.449 [9] M. 939 –944.” in Applied Imagery Pattern Recognition Workshop. von Hundelshausen. “Efficient training of artificial neural networks for more ANN in the system. Edition).022 0. V. ITSC ’06.” IEEE Expert. MA. pp. Kak. 577 –580. “A hybrid when the training step obtains good score. “Adaptive road detection through continuous environment learn- and FAPESP to the INCT-SEC (National Institute of Science ing. 1st ed.047 0. pp. 2496 road detection system that uses multiple ANN.016 0. 237 –267. Broggi and S. and E. and A.” Pattern Analysis and Machine Intelligence. 88–97. 32. Alvarez. January 1999.035 of technical beings?” Artif. 1999. 2002. 3. Thorpe.023 0.042 0. Sejnowski. 62 –81. T.010 0. 14. 1995. 1999 IEEE/IEEJ/JSAI International Data 8 0. IEEE Transactions on. “Argo and the millemiglia [24] S.031 0. neural networks combinations. pp. pp.” Journal of Intelligent & Robotic Systems.131 0. 11. Ortiz. Lopez.031 0. Dick- retraining is only possible if the system makes assumptions manns. more adequate assessment to the proposed problem. Siedersberger. “Rapidly adapting machine vision for order to improve the system in urban scenarios. 2007. no.029 0. 1999. vol. Pomerleau.058 0. vol. 103. Shafer. T. Data 5 0. April 1991. Takeuchi. February as the system classifies each block independently. 1. 325–348. vol. “Maniac: A next generation neurally based autonomous road follower. 1–20. ITSC 2008. Bertozzi. Manz. which can cause IEEE. “Unscarf. 2. 263–296.041 0. Kanade. Gregor. Intell. M. relevant. “A road following approach using artificial processes 573963/2008-9. 3. 10. D.023 Conference on.013 detection using real-time voting processor.051 0. 1. “Color model-based real- time learning for road following. and M.020 0. This information can [14] J.1007/s10846-010-9463-2. Tan. pp. Tange. 2004. J. Desouza and A. the estimation approach for autonomous dirt road following using multiple clothoid segments. Also. IEEE Transactions on. pp. “Vision-based road detection in automotive sys- is capable to learn colors and textures instead of totally tems: A real-time expectation-driven approach. conditions to retrain the ANN without human intervention [19] T. “Vision for mobile robot navigation: a FROM NIST THAT ALSO USING ANN. IEEE. the system classification provides ber 2010. the influences of the shadows in the image and maybe use [17] D. As a future work. Dickmanns. “Ems-vision: a perceptual system for autonomous vehicles. September 2006. October 2004. “Visual navigation for mobile TABLE II robots: A survey. [22] P. pp. and G. score. Finally. T.026 0. 2008.022 Data 3 0. pp. Pellkofer. vol.075 0. integrate it with other visual systems like lane detection in 1991. Shneier. 10. TABLE OF ERRORS : C OMPARISON BETWEEN OUR SYSTEM AND [21] pp.030 Systems. In general. Thorpe. and H. 52 –57. 53. no.026 0. vol.” Intelligent Systems and their Applications. 1996. vol.074 0. 2010 13th International IEEE Conference on. [20] M. We presented a visual Conference on Robotics and Automation.997 0. vol. Iagnemma. 2000. 
The 7th International IEEE Conference on. The Computational Brain. Incorporated. 2000. M. Pomerleau. to integrate our approach with laser mapping in order to make Apr. 2. The 2005 DARPA Grand a better classification if we add a preprocessing to reduce Challenge: The Great Robot Race. Foedisch and A. Crisman and C. Finally. Broggi. Bertozzi. Bertozzi and A. 1. vol. “Vehicles capable of dynamic vision: a new breed Data 4 0. since the system reached good classification results [15] M.253 0. Lopez. pp. Broggi. Broggi. 2010. Pro- ceedings.017 0. Wolf. 3. Data 9 0.044 0. vol. 49–76. Fascioli. [4] F. October 2008.035 0. International Conference on Intelligent Autonomous Systems. pp.032 0. Shinzato and D. Bonin-Font. Oliver.” in Intelligent Transportation Systems many classifications with low degree of certainty lead to low Conference. detection algorithms. 16 – 21.Brazil).062 0. Neural Networks: A Comprehensive Foundation (2nd in automatico tour. [11] A. pp. [1] A.032 0.118 0.050 0. to improve the processing efficience using a GPU. and S.” in Intelligent Transportation Systems. Buehler.043 0. “Adaptive real-time road detection using neural networks. F. Haykin. road types and weather conditions. “Vision and navi- using other sensors. October 2004. 1998. 2008. 10. A. and S. we intend 1993. 1994. Our ANN – 2501. R EFERENCES [23] P. Lutzeler.” Pattern Analysis and Machine Intelligence. Jan. The authors acknowledge the support granted by CNPq [21] ——. M.” Robotics and Autonomous Data 1 0. Jochem. Furthermore.039 0. vol. [5] G. IV 2000.” Neural Computation. Bert. pp. Prentice Hall. Feb. 11th International IEEE Conference on. Our system was compared with another system and [13] F. and C. we plan to autonomous navigation. Pomerleau and T. 7. 1135 –1140.038 0. 1095 . Alvarez and A.1007/s10846-008-9235-4. ACKNOWLEDGMENT Proceedings. these assumptions gation for the carnegie-mellon navlab.” in Proceedings of the and without making assumptions about the image. 1988. Churchland and T. Chang. 08/57870-9 and 2010/01305-1. the results obtained in the experiments were pp.041 tion Systems. Singh. 3. Proceedings of the about the location of the road in the image. pp. 55 –64. USA: MIT Press. pp. “A robust lane Data 6 0. Serrat. August 1998.” in Intelligent Vehicles Symposium. pp. Thorpe. Cambridge. 167 – 172.018 0. 2nd ed. Septem- condition changes. Fascioli. vol. 2000. and A. 2006. no. Hong. May are not necessary. Y. pp. Ohta.248 0. a color vision system for the Visual road recognition is one of the desirable skills to detection of unstructured roads.032 0. no. no. J. 19 –27. A..031 [8] A.022 0. MSE Error Rate 1 − score Scenery LRM NIST LRM NIST LRM NIST [6] M. “Novel index for objective evaluation of road be used by a control algorithm. K. S.-J. Springer Publishing Company.106 0. or if the road is located [3] C. and Technology .061 0.020 0. The [2] R. 362 –373. Proceedings. C ONCLUSION IEEE Transactions on. Ninomiya. 24. 33rd. May 2010. D.055 [7] E.071 0. [10] J.” Image Processing. since [12] C. problems in certain traffic situations. July 1998. 2410 –2415. survey. Takahashi.071 0. “Vision-based intelligent vehicles: State of the art and perspectives. M. pp. 1.017 0. 2004.because their system must be retrained over the path. [18] D.” Journal of Artificial road appearance. Data 2 0. confidence factor for each sub-image. Diego. IEEE. We also plan automated vehicle steering. “Gold: a parallel real-time stereo vision system for generic obstacle and lane detection. Hebert. 
our training evaluation method is a Intelligence Research. Jochem.021 0. Wuensche. J.” in Intelligent Transporta- Data 7 0. 815 –820. 1–16.” in Intelligent Transportation Systems.035 0. It may be possible to get [16] M. M.