2006 10th International Workshop on Cellular Neural Networks and Their Applications, Istanbul, Turkey, 28-30 August 2006

Color Processing in Wearable Bionic Eyeglass

Robert Wagner*, Mihaly Szuhaj†
*Peter Pazmany Catholic University, wagnerr@sztaki.hu
†Hungarian National Association of Blind and Visually Impaired People, Budapest, Hungary

Abstract - This paper presents the color processing tasks of the Bionic Eyeglass project, which helps blind people gather chromatic information. A color recognition method is presented that specifies the colors of the objects in the scene. The method adapts to varying illumination and smooth shadows on the objects, performing local adaptation of the intensity and global chromatic adaptation. Another method is also shown that improves the extraction of display data based on chromatic information.

Index Terms - Color processing, image segmentation, guiding blind people.

I. INTRODUCTION

Despite the impressive advances related to retinal prostheses, there is no imminent promise that they will soon be available with a realistic performance to help blind or visually impaired people navigate in everyday situations. In the Bionic Eyeglass project we are designing a wearable visual sensing and processing system that helps blind and visually impaired people gather information about the surrounding environment. The system is designed and implemented using the Cellular Wave Computing principle and the adaptive Cellular Nonlinear Network (CNN) Universal Machine architecture [1], [2]. Similarly to the Bi-i architecture ([3]), it will have an accompanying digital platform - in this case a mobile phone - which will provide binary and logic operations.

This paper deals with the color-related tasks of the Bionic Eyeglass project. A large part of the color processing operations are more suitable for digital hardware because of the sophisticated nonlinear transformations between color spaces. On the other hand, many operations require local processing; these are computationally intensive, but fast when performed on an analog processor array. The chosen CNN architecture has the advantage of large computing power and the ability to perform image processing tasks, which comes from its topographic and parallel structure. Using this architecture we can also exploit the fact that mammalian retinal processes were successfully implemented on it ([4]), which enables the realization of retina-like methods.

In section II we describe the role of color processing in the wearable eyeglass for blind people. In section III we describe the color specification algorithm, in section IV the extraction of displays based on color information, and in section V the planned architecture.

II. COLOR PROCESSING FOR BLIND PEOPLE

In the Bionic Eyeglass project there are two main applications of color processing: 1) determination of the colors of the scene; 2) filtering based on color to extract non-color-related information (e.g. displays).

The first task is considered, despite the fact that blind people do not perceive color, because the color information of some objects is important for them. A good example of such an application is the color of clothes. Although blind people have little mental image of color, the color of clothes is important in order not to put on clothes of unmatched colors. Our task is to determine the color of the piece of clothing recorded by the camera. Besides the color of the cloth we also have to determine its texture, since this too can determine whether two clothes match. At the next stage of the project, after determining the color, we will classify the texture into one of the following categories: uniform, striped, checkered and polka-dotted.

The second task differs from the first one in that here the focus is not the color of a given object; we use color in order to ease the extraction of other useful information. A good example of this is the reading of displays. Usually displays can be identified by their larger intensity; however, the use of chromatic information helps to distinguish them from natural light sources. We take advantage of their color although we are not interested in what color they have.

III. COLOR SPECIFICATION

Our aim is to determine the colors of the objects as seen on an RGB image. Our system will provide the color names and their location inside the image. The algorithm consists of color space transformation, luminance adaptation, clustering, white balance, color-naming and location-specifying steps. The flowchart of the algorithm can be seen in fig. 1, and the steps are described in sections III-A to III-E.

Fig. 1. Processing steps of the color specification algorithm. Tick boxes show operations that will be implemented or partially implemented on CNN architecture.

A. Luv Color Space

The specification of the colors and the subsequent computations are done in the CIE-Luv space ([5]); later we also used the Lab space without finding large performance differences. As in [8], we used a perceptually equalized color space: the Luv space has the property that the Euclidean distance between the L*, u*, v* coordinates of two colors is closer to the perceptual difference between them than that of the R, G, B coordinates. In order to compute the L*, u*, v* values we converted the R, G, B values to X, Y, Z and then to L*, u*, v*; the first transformation is a linear one, the second is nonlinear. The detailed description can be found in [6]. We could also use the CIE-Lab color space ([5]), with which we had similar results; however, its computation requires the calculation of a cube root, while the u*, v* values need only divisions. Both the Lab and Luv color spaces have the advantage that the L coordinate carries the luminance information and the remaining two coordinates the chromatic information. This enables us to deal with the luminance separately and to reduce the luminance differences due to illumination changes: during luminance adaptation we change only the L values. This part of the algorithm is explained in detail in section III-B.
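As an illustration of this two-stage conversion, here is a minimal NumPy sketch based on the standard CIE formulas (not the authors' implementation); the sRGB matrix and the D65-like white point are assumptions:

import numpy as np

# Linear step: (linear) RGB to XYZ; assumed sRGB primaries, D65 white.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
Xn, Yn, Zn = M @ np.ones(3)              # white point: R = G = B = 1
un = 4 * Xn / (Xn + 15 * Yn + 3 * Zn)    # u', v' of the white point
vn = 9 * Yn / (Xn + 15 * Yn + 3 * Zn)

def rgb_to_luv(rgb):
    """rgb: (..., 3) array of linear RGB values in [0, 1]."""
    xyz = rgb @ M.T                      # linear transformation
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    y = Y / Yn                           # nonlinear step: lightness L*
    L = np.where(y > (6 / 29) ** 3, 116 * np.cbrt(y) - 16, (29 / 3) ** 3 * y)
    d = X + 15 * Y + 3 * Z + 1e-12       # u*, v* need only divisions
    u = 13 * L * (4 * X / d - un)
    v = 13 * L * (9 * Y / d - vn)
    return np.stack([L, u, v], axis=-1)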


B. Luminance Adaptation

Color determination is relatively easy if everything is ideal: we have reference illumination and the illumination is constant over the scene. However, in most cases this is not satisfied, especially with blind people, who can only guess the illumination conditions while taking the picture. We designed our system to cope with smooth illumination changes, i.e. gradual changes over the whole scene. Such changes can be filtered out by eliminating the differences in the spatial low-pass component of the intensity. We applied a strong low-pass filtering, so that only the very low frequency components are eliminated during the adaptation; this has the effect that only the illumination differences are reduced. The low-pass component was calculated using a diffusion ([12]) that is equivalent to a Gaussian convolution with sigma parameter $\sigma = 0.7 \cdot$ image width. Such a very low frequency low-pass component does not eliminate all the illumination differences with certainty: our challenges lay in the elimination of shadow borders and unequal illumination rather than in the identification of uniform regions.

At each pixel we perform the following operation:

$L'(i,j) = L(i,j) \cdot \mathrm{mean}(L) / \bar{L}(i,j)$   (1)

where (i,j) are the coordinates of the pixel, L(i,j) is the luminance value of the pixel, $\bar{L}(i,j)$ is the value of the low-pass filtered luminance at the pixel, mean(L) is the mean of the intensity over the whole image (which is also the mean of the local averages), and L'(i,j) is the adapted result. This modification changes the pixel value according to the local average around it ($\bar{L}(i,j)$): if the local average is smaller than the mean of the image, we enhance the pixel, otherwise we reduce its value. This works similarly to [7]. Hence the luminance differences between the regions are reduced, while the texture information is maintained because it changes more abruptly over the scene. An example can be seen in fig. 2: a) shows the input picture taken by a standard digital camera and b) the adapted result; we can see that the differences in the illumination are reduced.

Fig. 2. Luminance adaptation.
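As an illustration, the adaptation step (1) can be written in a few lines. This is a minimal sketch assuming a Gaussian blur in place of the diffusion template, not the authors' CNN implementation:

import numpy as np
from scipy.ndimage import gaussian_filter

def luminance_adaptation(L, sigma_frac=0.7):
    """Implements eq. (1) on a 2D float array of luminance values."""
    sigma = sigma_frac * L.shape[1]       # sigma = 0.7 * image width
    L_lp = gaussian_filter(L, sigma)      # very low-pass component
    # Pixels in dark regions (local average below the global mean) are
    # enhanced, pixels in bright regions are attenuated.
    return L * L.mean() / (L_lp + 1e-12)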

a.For the clustering we used the well known K-means algorithm [8]. The number of the initial cluster centers is crucial since it determines the number of resulting clusters. Using a smaller number of clusters. The main cluster colors can be seen on d. along shadow borders (see fig.u*. shadowy white on the lower right corner and the large white region on fig. On this picture the pixels have the color of the cluster they assigned to. in which we merged similar clusters. we get a map (index map) in which all pixels will have a value depending on the cluster to which it was assigned. The merging could be avoided if we applied anisotropic diffusion ( [12]) before the clustering. fig. a. Such splitting cases occur mostly at unequal illumination of the same texture e. but does not eliminate a luminance change that affects the whole scene (e.) On the other hand in case of more homogeneous cases we have the problem.g. but the difference of the u* and v* values is smaller than a parameter: P.). 3.) Fig. The weights are the number of pixels in C1 and An example of merging can be seen on fig. Hence we made two criterions for merging of two clusters. This is only applied in case saturation of the colors is larger than: p.g.. in order not to merge inconsistent clusters. c. In the Luv space the saturation can be computed as (2). this will be feasible on nonlinear CNN chips.v* values of the cluster centers is smaller than a predefined PLu. The criterion for the saturation is applied in order not to merge achromatic values. Luminance adaptation equals the differences within the scene. that some regions are not merged (e. saturation = U2 H v2 (2) . This has the effect. 3 c.). D: c. It consists of the following main steps: 1) Determination of K initial cluster centers. Clustering and merging of clusters.) shows the clusters after merging of similar clusters. D. and the cluster centers Luv coordinates will be the average of the pixels coordinates.3 b. we extended the clustering method by a post processing step. pixels of different color can be assigned to the same cluster. like dark-gray and white. 3. White Balance In spite of the luminance adaptation illumination variation is still a problem for the recognition of colors.). An example of the clustering can be seen on fig. the whole scene is in shadow. Experimentally tested we choose 16 initial cluster centers.) As a result of the clustering.) 1) The difference of the L.) d.) shows the original picture taken by a standard digital camera. C2. parameter 2) The difference of the L value is arbitrary large. 2) Assigning the image pixels to the centers. if the cluster centers change more then a predefined criterion. 3. These are both based on the Luv color coordinates of the cluster centers. To overcome this problem.. that a region of the same color is splitted between different clusters. In these cases the different clusters of the same color have similar chromaticity but different luminance. We specified strong criteria for the merging. which correspond to different colors but have no chromaticity difference.) we can see the result of clustering. Regions of a given color represent a cluster. 4 a. b. When a cluster C1 is merged to another one C2 the pixels of C1 are assigned to C2 (their indexes are changed) and the cluster centers Luv values become the weighted average of the two centers. 3) Recalculate the cluster centers as the mean of the pixel values belonging to a given cluster. that belong to the cluster. [9]. On b.g. 4) Go back to step 2.

D. White Balance

In spite of the luminance adaptation, illumination variation is still a problem for the recognition of colors: luminance adaptation equalizes the differences within the scene, but it does not eliminate a luminance change that affects the whole scene (e.g. when the whole scene is in shadow). White balancing means in our case that the cluster center whose color lies close to the white color (saturated RGB values) is amplified so that it becomes white. This step assumes that the scene contains white regions; consulting with blind people we were assured that this condition can be fulfilled. If we cannot assume white areas on the scene, we can use scene statistics instead; this method is described in more detail in section IV. We perform the following steps:
1) We choose the cluster center whose Luv coordinates lie the closest to the saturated white value.
2) We convert the L*, u*, v* values of the cluster centers to R, G, B values and compute the ratio between the RGB values of the white color (255, 255, 255) and those of the chosen cluster center (3):

$C' = C \cdot C_{rw} / C_c$   (3)

where C is one of the R, G, B channels, $C_{rw}$ is the channel's value for the reference white and $C_c$ is the value of the channel for the chosen close-to-white cluster. After the scaling we recalculate the Luv coordinates of the cluster centers. The separate scaling of the R, G, B channels corresponds to the diagonal model in [10]. This operation is especially important in case of mobile phone cameras, the sensor of the planned device, which is more sensitive to blue colors: these cameras make less balanced photos than standard digital cameras and their pictures often have a large proportion of a given color (fig. 4). The operation of white balancing is illustrated in fig. 4: a) shows the original image taken by a mobile phone, b) shows the result of clustering and merging of the clusters, on c) we can see the clusters after white balance, and d) shows the colors of the four main clusters.

Fig. 4. Clustering and white balance.

E. Retrieval of Color and Location Names

The last stage of the algorithm is to specify the color names based on the Luv coordinates of the cluster centers, and the location names based on the map which shows the pixels that are assigned to each cluster. We classified each cluster center (and so the cluster) to given predefined colors. We used the Wiki list of colors ([11]) as predefined colors, and we chose the color from the list based on the Euclidean distance of the Luv coordinates between the cluster and the predefined values.

For the location information we computed the following measures, based on a binary mask which contained the pixels that were assigned to the given cluster (a location-naming sketch follows at the end of this subsection):
* Center of gravity (CoG) coordinates. The CoG coordinates are classified vertically and horizontally into 3 categories each: up, middle, down and left, middle, right.
* Deviance from the center of gravity. The deviance is needed in case of non-continuous regions, which might have their CoG in the middle although they are located on the sides; if the deviance is high, they are classified as regions lying on both sides.

The method that specifies the uniform regions has the following free parameters:
* amount of diffusion to specify the low-pass component,
* initial number of clusters,
* criteria at the merging of clusters (2 free parameters).

We tested the algorithm on about 20 pictures of different clothes under different illumination and adjusted the parameters to recognize the regions of the same colors, as seen in figs. 3 and 4.
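As an illustration, the location-naming measures can be sketched as follows; a minimal sketch with an assumed deviance threshold and classification scheme, not the authors' implementation:

import numpy as np

def location_name(mask):
    """mask: non-empty 2D boolean array of one cluster's pixels."""
    rows, cols = np.nonzero(mask)
    h, w = mask.shape
    cog_v, cog_h = rows.mean() / h, cols.mean() / w   # normalized CoG
    vert = ("up", "middle", "down")[min(int(cog_v * 3), 2)]
    # High horizontal deviance: a non-continuous region on both sides.
    if cols.std() / w > 0.35:                         # assumed threshold
        horiz = "both sides"
    else:
        horiz = ("left", "middle", "right")[min(int(cog_h * 3), 2)]
    return vert, horiz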

IV. COLOR CORRECTION AND FILTERING

This section deals with a different task: here our aim is to extract useful information from the scene based on chromatic information. We filter out displays of predefined colors (see fig. 6). In order to be able to filter out predefined colors, we first have to remove the chromatic distortion from the scene. We implemented a method similar to the white correction in section III-D. If we cannot assume white areas on the scene, we can use scene statistics, such as the mean value or local average of the R, G, B channels, and scale the channels based on the ratio of the desired values and the calculated statistic. In [10] such methods are called gray world algorithms. The correction can be seen in (4):

$C' = C \cdot C_{rg} / \mathrm{mean}(C)$   (4)

where C is one of the R, G, B channels, $C_{rg}$ is the channel's value for the reference gray and mean(C) is the mean value of the channel over the whole scene. In our application $C_{rg}$ was chosen to be 128 for all channels. This means that when a channel's mean value is larger than half of the maximal intensity it is attenuated, otherwise it is enhanced. The color correction can only be applied if it can be assured that the average of the color channels on the reference illuminated scene is nearly 128. In the color specification task we regard clothes, and the texture of one cloth can occupy the whole visual field. In case we apply our method to outdoor views, we regard many different textures whose average is usually nearly gray; in this case all deviations from this value are caused by chromatic illumination or by the different sensitivity of the color sensors (see fig. 5). In fig. 5 we can see an example of this color correction: a) shows the original image taken by a mobile phone, in which we can see its bluish color; b) shows the blue channel of a); c) shows the adapted version with the smooth correction; and d) shows the blue channel of c).

Fig. 5. Color correction.

After this we extract the predefined color that we need for the specific application. In the case shown we filtered the greenish color of the display based on the hue and saturation data. The hue range was specified so that the most important parts of the display are extracted. In fig. 6 we can see the original picture taken by a mobile phone (a) and the result of filtering for the green-yellow color (c). The color-based filtering is able to reduce the number of possible locations of the displays; even if we extract non-display regions, other methods should also be applied to locate the displays, but these are out of the scope of this paper. In these applications the color-based filtering is considered a simple preprocessing step.

Fig. 6. Color filtering.
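As an illustration, the hue- and saturation-based display filtering can be sketched as follows; a minimal sketch with assumed hue and saturation bounds for the green-yellow range, since the paper does not give the exact values:

import numpy as np
from matplotlib.colors import rgb_to_hsv

def filter_display(rgb, hue_lo=0.15, hue_hi=0.40, sat_min=0.4):
    """rgb: (H, W, 3) floats in [0, 1]; returns a boolean pixel mask."""
    hsv = rgb_to_hsv(rgb)
    h, s = hsv[..., 0], hsv[..., 1]
    # Keep sufficiently saturated pixels whose hue falls in the
    # green-yellow window; the mask marks candidate display regions.
    return (h >= hue_lo) & (h <= hue_hi) & (s >= sat_min)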

V. REALIZATION

For the use of our algorithm it is important that at the final stage of the project we implement our method on the wearable architecture; in our case this is a CNN array accompanied by a mobile phone. Mobile phones are advantageous because they provide:
* camera input,
* speaker,
* power supply.

The camera input image sensor can be realized on CNN architecture as well ([3]), so the CNN array is important for:
* parallel image processing,
* adaptive image sensing.

In fig. 1 we can see the steps of the algorithm which we are planning to implement on CNN architecture. Operations that require local processing can be realized more efficiently on the CNN architecture:
* the local adaptation algorithm,
* determination of the location of the clusters.

Steps that are easier to realize on the digital architecture:
* operations in which the pixels are processed separately, such as the strong nonlinear transformations for the conversion between color spaces,
* calculation of statistics for the white correction and color correction,
* clustering and merging,
* rule-based evaluation of the results: determination of the color name based on the cluster L, u, v data, and specification of the location names based on the CoG coordinates.

VI. FURTHER PLANS

We plan to extend the color specification method with the analysis of the texture of clothes. The display recognition requires non-color-based methods to locate the displays, and then we can design the character recognition methods. On the field of implementation we will have to begin to transfer the algorithms to the mobile phone platform: in the design phase we are using PC simulation, and then we plan to implement our algorithm on a mobile phone. At the final stage, when the hardware integration of the mobile phone and the CNN-UM array makes it possible, we will realize the method on the target platform.

VII. ACKNOWLEDGMENT

This work is a part of the Bionic Eyeglass project under the supervision of Tamas Roska, Analogical and Neural Computing Laboratory, Computer and Automation Research Institute, Hungarian Academy of Sciences (MTA SzTAKI), Budapest. We would also like to thank Kristof Karacs, Marko Tkalcic, Anna Lazar and David Balya.

REFERENCES

[1] L. O. Chua, T. Roska, Cellular Neural Networks and Visual Computing, Cambridge University Press, Cambridge, UK, 2002.
[2] T. Roska, "Computational and Computer Complexity of Analogic Cellular Wave Computers," Journal of Circuits, Systems, and Computers, Vol. 12, No. 4, pp. 539-562, 2003.
[3] A. Zarandy, Cs. Rekeczky, P. Foldesy, I. Szatmari, "The new framework of applications - The Aladdin system," Journal of Circuits, Systems, and Computers, Vol. 12, pp. 769-782, 2003.
[4] D. Balya, B. Roska, T. Roska, F. Werblin, "A CNN Framework for Modeling Parallel Processing in a Mammalian Retina," International Journal on Circuit Theory and Applications, Vol. 30, pp. 363-393, 2002.
[5] Publ. CIE 15.2-1986, Colorimetry, 2nd ed., CIE Central Bureau, Vienna, 1986.
[6] M. Tkalcic, J. Tasic, "Colour spaces - perceptual, historical and applicational background," Eurocon 2003 Proceedings (Editor: Baldomir Zajc), IEEE Region 8, September 2003, available at: http://ldos.fe.uni-lj.si/docs/documents/20030929092037_markot.pdf
[7] R. Wagner, A. Zarandy, T. Roska, "Adaptive Perception with Locally-Adaptable Sensor Array," IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 51, pp. 1014-1023, 2004.
[8] I. Kompatsiaris, E. Triantafillou, M. G. Strintzis, "Region-Based Color Image Indexing and Retrieval," 2001 International Conference on Image Processing (ICIP 2001), Thessaloniki, Greece, October 7-10, 2001.
[9] Web page about k-means clustering: http://fconyx.ncifcrf.gov/~lukeb/kmeans.html
[10] K. Barnard, "Practical Colour Constancy," PhD thesis, Simon Fraser University, School of Computing, 1999, available at: http://kobus.ca/research/publications/PHD-99/KobusBarnard-PHD.pdf
[11] Wiki list of colors, http://en.wikipedia.org/wiki/List_of_colors
[12] T. Roska, L. Kek, L. Nemes, A. Zarandy, P. Szolgay (eds.), "CNN Software Library (Templates and Algorithms), Version 7.3," Report DNS-CADET-15, Computer and Automation Research Institute, Hungarian Academy of Sciences (MTA SzTAKI), Budapest, 1999.
