
Elston Tochip, Robert Prakash, Phillip Lachman
Tuesday, March 20, 2007
EE362/Psych 221

Camera Phone Color Appearance Utility
Finding a Way to Identify Camera Phone Picture Color

Today, in the 21st century, phones have become, and will continue to be, the portable digital platform for a variety of imaging applications. From pictures to video to personal organizers, they have become the personal computer on the go: don't leave home without it. With this new technological advancement, we saw an opportunity to take the camera phone one step further and use it to help vision-impaired individuals identify the color of images.

Approximately 10 million blind people live within the U.S. today, including 55,200 legally blind children and 5.5 million elderly people. Color-blind people, comprising 8% of males and up to 2% of females in a population of 300+ million, account for over 30 million within the U.S. These people have a right to see what many of us take for granted on a daily basis: the right to experience life to its fullest.

To help make a small push in that direction, our goal for this project was to develop a software application that would accomplish the following:

1) Receive a phone-camera-quality image
2) Identify the predominant color region(s) within the image
3) Estimate the color name for the predominant region
4) Audibly transmit the predominant color to the user

Our software takes the incoming image, in which the target is surrounded by a white frame fixed 6-8 inches away from the camera phone, and runs an edge detection algorithm to identify the background of interest. From there the code, using HSV as the coordinate system, identifies the color of each pixel within the background and sums up the colors before announcing the predominant color within the region. The issues we ran into and our resulting solutions are explained below.

Edge Detection

The focus of this task was to identify the target card. After poring through various sources on the web and discussing the topic with coworkers, we decided on a method that simply measured steep or sharp changes in intensity across an image. The fundamental idea was that where an edge existed, there would be a change in intensity that was both quick and large enough to measure. By computing the gradient around every pixel, we could find these points and mark them along the image.

To accomplish this gradient computation, we used the Canny Edge Detection
method. This algorithm works in three steps.

1) Gaussian smoothing of the image
2) Computing the gradient of the intensities in the image
3) Thresholding the norm of the gradient image to isolate edge pixels

The resulting output of the Canny algorithm was a binary-like image with high values at pixel locations where an edge was detected, and a low base value for all other pixels. See the figure below.

Figure 1: Original image (left) of white card on white surface versus the Canny thresholded image (right)
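For reference, the core of this step can be reproduced in a few lines with MATLAB's built-in detector. This is only an illustrative sketch, assuming the Image Processing Toolbox; the input file name is hypothetical and this is not our project code:

    img  = imread('card_photo.jpg');    % hypothetical input photo
    gray = rgb2gray(img);               % Canny operates on intensity values
    bw   = edge(gray, 'canny');         % smoothing, gradient, and thresholding
    figure;
    subplot(1,2,1); imshow(gray);       % original image
    subplot(1,2,2); imshow(bw);         % detected edge pixels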

However, before diving into more specifics about the algorithm, we must address the problems associated with edge detection, particularly for our purposes.

Identifying Edge Location
The Canny Edge Detection algorithm effectively “paints” the pixels that mark the edges
of the card. Now, the question becomes how to use these pixels to do the following:

1) Find the location of the OUTER edge of the white card in the picture
2) Find the location of the INNER edge of the target hole in the white card
3) Distinguish stripes or other patterned lines from the actual card edges

The task is a lot more difficult than visual perception might suggest. Looking at the thresholded image, it appears easy to locate the lines of the card, the lines of the pattern if they exist, as well as the random dots of background noise. The problem is to mathematically associate these high-valued pixels with their perceived edges in the image.

Problems with Identifying Edges
To simplify the problem during first-pass analysis, I used pictures of the white card positioned vertically, standing up, as shown in the figure below.

Figure 2: Simplified picture for the first-pass analysis (labels: Card, Target Hole)

As shown, the sides of the card match up perfectly with the sides of the photograph. If this is always the case, one could simply take the thresholded image and locate the edges by computing the maximum values and binning these values (by row and then by column) into a histogram based on x and y pixel location (x being the horizontal axis, y vertical). By finding the most common values in the histogram, that value would correspond to the edge of the card or inner target hole.

However, the ideal situation described above cannot be assumed to happen EVERY time the camera is utilized, especially if the user is blind. For example, the card may be slanted as in the below figure:

Figure 3: Slanted target (labels: Card, Target Hole)

In this case, the binning of high pixel values no longer works. For example, in the above figure, the bottom of the card has a negative slope, whereas the right side has a positive slope, since the card is at a high angle. One could solve this by finding all the high pixel values as before and computing a linear regression fit of the points to find the edges of the target white card, and similarly of the hole in the card. Switching from one side of the card to another could be determined by computing a change in sign of the slope of a set of points. Another possible solution would be to compute and adjust the slope between points recursively. Computationally, this would take a long time and be very complex, but it would work. Still, if there was a way to guarantee an upright, vertical card, that would greatly simplify the algorithm.

Impact of Background Images

During our experiments with various background targets, several situations occurred where detecting the outline of the card with the Canny algorithm proved to be very difficult. These were grouped into two categories based on 1) lighting and 2) the surrounding background of the target.

In the first case, lighting can prove to be a very important factor in detecting an object. If the light source is dim, the lack of light captured in the image may cause poor intensity gradients via the Canny algorithm; not enough photons are reflected off the white card to sufficiently separate it from its background. See the example below.

Figure 4: White card on off-white background in ambient lighting. Even moderate lighting affects the thresholding ability of the Canny algorithm!

An extension of this problem surfaces if you have perfect lighting and a target with a color matching EXACTLY that of the white card. The result is that the human eye can visually distinguish the difference, but the image does not reflect (numerically) a high enough difference between the background and the edge of the card. For example, in the case of an off-white background, say beige, you would not be able to see the edges at all.

Another instance where background color could affect the edge detection is a striped or checkered target, causing a series of lines to appear in addition to the edges of the card. The result is perfectly formed lines that will pass the threshold test of the Canny algorithm. There needs to be a solution to this problem.

Figure 5: Image of a white background with stripes. Note the edges and stripes are all thresholded. Which one is the correct edge??

User Complications/Problems

In addition to the math, user complications presented a challenging task to resolve as well. As defined by the project statement, we wanted to develop an algorithm that could be used by color-blind and BLIND individuals through the use of the camera phone. Color-blind individuals can visually aim the camera phone well enough to ensure the white card is completely in the field of view of the lens. How about a blind person?

We know that putting the camera right up to the target hole in the white card is futile, since the lack of lighting will probably leave us a dark, blackish-looking image. This would be similar to taking a picture at night without a flash. Additionally, this simply nullifies the use of the white card as a baseline for calculating luminance. The problem thus becomes how to reduce camera aiming error for someone who cannot see.

Having the sense of vision, we as designers may OVERSIMPLIFY this problem dramatically. If you think that aiming is simple even if you cannot see, think again. Here is a test you can do to prove this to yourself:

1) Take any camera you have, and a white index card (4" x 6")
2) Close your eyes
3) Then try to take a centered picture of the card at about a 1 foot length

You can probably do this fairly easily. NOW, try it again, only this time using a different order of steps:

1) Close your eyes first for 5 minutes
2) Pick up the camera and white card
3) Try to take a picture at the same length as before with the card centered

Look at the pictures. There should be a marked difference in the centering of the white card in both images (unless you peeked!). This is all true knowing we can see to begin with. How much more difficult would this be for someone who has NEVER been able to see? Another example would be trying to take a picture of yourself. People always try it and always miss a few times (cutting off part of the face, head, or others in the picture).

Flying Blind

Of all the stated challenges regarding the use of this algorithm, we saw the most important one as that of user feasibility. The algorithm, no matter how perfect, is useless when the target white card cannot be acquired. It is akin to having a skilled marksman hit a target without aiming. More importantly, how would a blind user know that the target was in the center, or even if the picture they took included the white card at all? If we are to believe that we can simply line up a camera and the white card target, I believe we have proved that is a false assumption. Similarly, a blind individual would not want to be continuously adjusting the card and retaking the picture repeatedly before coming close to an acceptable one. How do we solve this?

Our Problems Solved

As discussed, a variety of complications arises that must be considered before the algorithm can even be written. These involved everything from the math to the user interfaces required to use the algorithm properly. The following paragraphs will address the aforementioned concerns that appeared over the course of our analysis and design of the edge detection algorithm, for both mathematical computation and user feasibility.

The Contraption: The Design

To resolve this situation, and in agreement with Bob Dougherty, we decided a contraption that could be connected to the phone and used to offset aiming error was a necessity. A bulky contraption would not be ideal to carry around; the goal was to devise something small and compact enough to fit in a pocket or small pouch that could be carried without discomfort.

We decided upon a device that was collapsible to the size of a 4 inch by 6 inch white card. On one end, a mounting device could be used to attach the setup to the camera phone. The mount could include dials to allow fine adjustments of the phone position. On the other side, the card would be attached and allowed to flip out and stand upright. A distance of about 6-8 inches between the camera phone and the white card was set to keep the design small and yet allow ample lighting to gauge the color of the target. See the figure below:

Figure 6: Diagram of the assembly from a horizontal view (labels: Camera, Mount, Base Board to separate camera from card, Hinge Assembly to allow for folding, White Card)

Here are a few sample pictures taken before using the device and after using the device.

Figure 7: Photos without and with the card-holding device

The first is an attempt to photograph the orange shoebox; the second is an attempt to photograph the purple gorilla. Notice how the card is not completely in the picture and sometimes does not even include the target object!

The white card holder served multiple purposes other than just helping the user aim. First, the card holder guaranteed the card always dominated the field of view of the camera and took up a large area of the photographs, minimizing the problem of background clutter, which could only complicate the edge detection. Second, it also predetermined the white card orientation: the white card was always upright. We forced the white card to be positioned vertically such that the edges of the photo and the edges of the card were parallel. This simplified our algorithm computationally, since we could improve the efficiency of the algorithm by eliminating strange orientations, such as the slanted card described earlier.

White Card Modifications

Another issue mentioned earlier concerned low light level backgrounds or objects with a similar color to that of the card. To resolve this, I outlined the edges of the white card and the edges of the hole with a thick, black line. The white-to-black transition, I believed, would provide the steepest gradient for intensity, regardless of lighting and background.

Proof of Concept

To prove the usefulness of this card-holding apparatus, I used basic materials to construct a primitive but useful device, pictured below. Thin, wooden plywood boards were used for the base, with a shorter board attached at one end via a mini-hinge. To this shorter board we attached a white card with the black outlines on the outer edges and inner target edges as well. At the other extreme of the base, a simple thick paper clip was attached that allowed a slim phone to be locked into place.

Using this device I was able to take several pictures and run the edge detection, and later the color selecting algorithm, on them. It worked as well as I had expected. Even more impressive was the ability of the Canny algorithm to "see" edges against very white and dimly lit backgrounds. See the figures below:

Figure 8: Original picture of blue material using the card-holding device

Figure 9: Canny edge-detected thresholds

Now that we have specified the nature of the problem, we can proceed to a discussion of the development of our edge detection algorithm using the Canny Edge Detector.

The Algorithm

Step 1: Blurring and Sharpening Edges in the Image

Prior to using the Canny algorithm, the photographs are initially preprocessed to sharpen the edges present. This is done by using a Laplacian convolution mask. The kernel is an approximation of the second derivative, highlighting changes in intensity. The Laplacian kernel is simply a 3x3 matrix filled with -1's, except at the center, where the value is set to 8. Due to the Laplacian's high sensitivity to noise, Gaussian smoothing is done beforehand to blur and eliminate noisy pixels in the photograph. The smoothed photo is then added to the result of the Laplacian convolution to obtain a new image that has sharpened all edges for improved detection by the Canny algorithm.
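Below is a minimal sketch of this preprocessing, assuming MATLAB's Image Processing Toolbox. The input name and blur parameters are illustrative assumptions; the Laplacian kernel is the one described above:

    img     = im2double(rgb2gray(imread('card_photo.jpg')));  % hypothetical input
    gauss   = fspecial('gaussian', [5 5], 1.0);   % assumed smoothing parameters
    blurred = imfilter(img, gauss, 'replicate');  % blur away noisy pixels first
    lap     = [-1 -1 -1; -1 8 -1; -1 -1 -1];      % 3x3 Laplacian, center set to 8
    sharp   = blurred + imfilter(blurred, lap, 'replicate');  % add edges back in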

See the figures below.

Figure: Initial image (left) versus the image after blurring and sharpening (right)
Figure: The same comparison in grayscale

Figure: Top inner edge of the initial image (left) and the blurred/sharpened top inner edge (right). Notice the result is less noisy in the center and smoother. (Note: the outer edge spikes are from the photo.)

The first image has more noisy pixels at points around the edges of the card (see the top inner edge). The second image has reduced these spurious points through smoothing and the Laplacian convolution. The result is cleaner lines on the outside of the card and around the target hole.

Step 2: Using the Canny Algorithm

As stated earlier, the Canny Edge Detection algorithm provided a way to find the pixels that outlined the points of largest intensity change across the image. To reiterate, this process has three distinct phases:

1) Gaussian smoothing of the image
2) Computing the gradient of the intensities in the image
3) Thresholding the norm of the gradient image to isolate edge pixels

Before processing, the image was first converted to grayscale.

Computing the Gradient

The Gaussian smoothing of the image is done by convolving it iteratively with a Gaussian mask. The matlab code we obtained completes both the first and second steps using one matrix. In our case, we had to perform these operations in two directions, vertically and then horizontally on the image. The derivative of the smoothed image is then computed to identify the gradients, which identify the edges if they exist.

As users, we were able to tune the Gaussian mask by setting its size and standard deviation. Extensive testing showed that the larger the mask, the more smeared (and by default thicker) the edges became. A similar result occurred from increasing the standard deviation to a large value, so the smaller the standard deviation, the better it worked. However, the standard deviation could not be set too low, to avoid detecting small gradual changes in intensity. We eventually settled on a mask size of 20, with a standard deviation of 5.

However, initial testing showed variability to lighting conditions and background color. As expected, as lighting became weaker, this became more problematic, whereas excessive light minimally impacted the edge detection. This was a residual effect created by the gradient computation and its inability to highlight the white card edges. This led to our decision to use the black outlines on the white card, which was stated earlier. One important note is that regardless of the Gaussian mask parameters, the lighting effect was diminished when the black outlines were applied to the white card. This made the lighting gradients more visible for the white card against the background. The result was clearer detection of the white card outline every time.

Thresholding

The last step of the Canny algorithm performs a type of binary thresholding by setting all non-edge values to a single base number while leaving all edge pixels with a high number. This low "zero" is identified by taking a percentage (alpha) of the difference between the maximum and minimum intensity values from the norm of the gradient of the image. The result is an image that has high values only where edges exist, while the rest of the image is set to one base value (analogous to a binary image of ones and zeroes).

Ideally, we wanted to finely isolate all edges. The problem here is that if alpha is set too low, every random bright spot that was detected appears in the thresholded image. Conversely, by setting alpha too high, we definitely remove these bright spots, but at the cost of filtering out some of the edge pixels as well. After initial monte carlo testing over a few sample images, I was able to assess that the best value of alpha ranged from 0.10 to 0.15 in decent lighting conditions, regardless of the mask size or standard deviation used. In the end, I decided to fix alpha and keep it at a level of 0.10.
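To make the tuning concrete, here is our own reconstruction of the smoothing, gradient, and alpha-thresholding steps. This is a sketch, not the actual routine; the input name is hypothetical, while mask size 20, standard deviation 5, and alpha = 0.10 are the values quoted above:

    img      = im2double(rgb2gray(imread('card_photo.jpg')));  % hypothetical input
    g        = fspecial('gaussian', [20 20], 5);   % mask size 20, std dev 5
    smoothed = imfilter(img, g, 'replicate');      % Gaussian smoothing
    [gx, gy] = gradient(smoothed);                 % horizontal/vertical derivatives
    gmag     = sqrt(gx.^2 + gy.^2);                % norm of the gradient image
    alpha    = 0.10;                               % fixed level from our testing
    thresh   = min(gmag(:)) + alpha * (max(gmag(:)) - min(gmag(:)));
    bw       = gmag > thresh;                      % high only where edges exist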

Figure 9: Histogram of threshold values over various Gaussian masks

Step 3: Processing the Thresholded Image

At this stage of the algorithm, we have a thresholded image with the outlines of the white card and target hole identified. The next step is to find the target hole in the thresholded image and extract that matrix of pixels from the original RGB image for processing by the color detection part of our algorithm. Our idea was very simple, given we had simplified the problem of card orientation to that of a vertically standing white card. The key phases are as follows:

1) Detect pixels that have high thresholded values on the left, right, top and bottom of the photographs, searching from outside to inside
2) Bin the values for each side to estimate the white card outer edge location
3) Crop the original thresholded image based on the computed edges and repeat to find the inner target hole

Phase 1:

First, a detection of high-valued pixels in the thresholded image is done to identify the sides of the card. This was done using the idOuterEdgeOutsideIn.m matlab routine. Code was written to do a recursive search from the left side of the image until it found values exceeding the threshold for 6 consecutive pixels. The pixel location was stored, and the search continued up until the first quarter of the photograph size. This was done row by row throughout the image. Note we only searched horizontally, not vertically. Similarly, another recursive search was applied from the right side of the image throughout the last quarter of the photograph. We could limit the search to the first and last quarters of the photograph because of the assumed positioning of the card in the field of view using the card holder. This will be explained shortly.
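A simplified sketch of the left-side search follows. This is our own reconstruction of the logic for illustration, not the contents of idOuterEdgeOutsideIn.m; bw is the thresholded binary image from Step 2:

    [rows, cols] = size(bw);
    leftHits = [];                           % stored edge columns, one per row
    for r = 1:rows
        run = 0;
        for c = 1:floor(cols/4)              % search only the first quarter
            if bw(r, c), run = run + 1; else, run = 0; end
            if run == 6                      % 6 consecutive pixels over threshold
                leftHits(end+1) = c - 5;     % record where the run began
                break;                       % stop this row, move to the next
            end
        end
    end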

Phase 2:

Taking the distribution of the stored pixel locations from Phase 1 above, we could determine an estimate for the left and right outer edges of the card. To see this more clearly, let us take the values accumulated from searching the left fourth of the image. We simply sorted the values into bins associated with their appropriate positions along the horizontal axis in the image, with the 0 value being on the far left; this axis was broken down into pixels. At each bin, we expect a high value associated with a pixel location along which there was a vertical line in the thresholded image. By examining the number of pixels in each particular bin, we can identify an edge, and possibly more edges if the background has stripes.

In finding the edge for the left side, we start with the highest bin number (which should be closest to the left edge of the card, assuming pixel 0 of the horizontal axis is in the lower left, bottom corner of the image) and walk backwards towards the lowest bin. At each bin, the algorithm checks the total number of values it contains. If the bin contains a number greater than 10% of the sum total of all pixels, then we believe we have an edge. An equivalent procedure is applied to the right fourth of the image.

Why is this true? We know the left edge of the card is visible due to the black outline, AND we know the left and right edges run the total height of the photo (OR definitely a majority of it). Even if there is a gap between the top of the card and the top of the photo, we are guaranteed to find at least that one edge of the card. Since we only crop and search on the left fourth of the image, we know with certainty that the edge of the card is closest to the center within that section of the photo, since we have effectively cut out all other edges with the white card. If we look at the size of the bins, the dominant bin will be the bin closest to the center, constituting a sizable percentage (a minimum of 10-15%) of the total number of binned pixels. See the diagram below:

Figure 10: Diagram of the search procedure: First, a search is done row by row to obtain thresholded pixels on the left fourth of the image, followed by a similar procedure on the right fourth. (Labels: Card, Target Hole, Left Fourth, Right Fourth)

The computeSpread.m matlab routine assesses these bins, their contents, and the percentage of high pixel locations associated with each bin. There should be only one dominant set of bins on each side of the target.
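To make the binning test concrete, here is an illustrative sketch (again ours, not the actual computeSpread.m). It bins the stored columns from the left-side search, then walks from the bin nearest the image center back toward the margin, accepting the first bin holding more than 10% of all stored pixels:

    binCounts = histc(leftHits, 1:floor(cols/4));  % one bin per pixel column
    total     = sum(binCounts);
    cardLeft  = NaN;                         % left outer edge estimate
    for b = numel(binCounts):-1:1            % highest bin number first
        if binCounts(b) > 0.10 * total       % dominant bin implies the card edge
            cardLeft = b;
            break;
        end
    end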

Since we know the left and right edges from the binning of the values, we can find the outer edges of the card! To identify the top and bottom sides of the card, we simply use the information gathered from the left and right searches. The top side begins at the first few rows where we start receiving high pixel data for the left and right searches; the bottom corresponds to the last few rows of high pixel values. This is done in the findTopEdge.m routine. The top part of the card corresponds to the first few rows at which we start accumulating high pixels at points within a range of +/-25 pixels of the determined left and right edges. We merely take an average of the first 10 rows of these searches where high thresholded values appear. However, we do not count rows in the estimate that do not contain a value for both the left and right edges within a spread of 50 pixels. Again, we assume the bottom of the card is cut out by the use of the card-holding device; similarly, in case the top of the card is NOT cropped off, we simply estimate the edge from where the last few left and right edges still collect high inputs.

Phase 3:

Once the outer edge pixel locations were identified, another search, similar to the one described in Phase 2, is done to isolate the inner target hole via the idInnerEdgeOutsideIn.m matlab routine. This search also uses a binning method to identify edges, except we simply search to one edge and stop, on both the left and right sides. We can afford to do this since we know the card is white and we are just looking for the black outline of the target center. The top and bottom of the target hole are simply computed to be the first few rows (or last few rows) where the left and right searches locate high thresholded values. After cropping at the outer edges, all that is left is a picture where the edges are all inside the white card; the only stripe or high thresholded values should be the inner edge of the target hole. See the figure below.

Figure 11: Diagram of the inner target search: Notice that the outer edge has been cropped off. (Labels: Card, Target Hole, Left Side, Right Side)

Step 4: Color Detection Input

After identifying the inner and outer edges of the white card, the exact location of the target hole is identified by adding or subtracting the appropriate edges together in the function computeEdgeLocation.m. The original image is then cropped down to these edges to effectively zoom in on the color patch in the target. This 3-D matrix is then handed over to the color processing code.
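Once the four inner edges are known, the crop itself is a single indexing operation. The variable names below are ours, standing in for the outputs of computeEdgeLocation.m:

    % holeTop, holeBottom, holeLeft, holeRight: computed inner target edges
    patch = originalRGB(holeTop:holeBottom, holeLeft:holeRight, :);
    % 'patch' is the 3-D RGB matrix handed to the color processing code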

Things Learned

As simple as the search sounds, many things were learned and incorporated into the algorithm to make sure it functioned correctly.

Edge Location Facts

One interesting discovery was that the outer border of the photographs always had bright spots associated with it in the thresholded image. As a result, initial tests always identified at least one edge of the photograph as the edge of the white card. To resolve this, each image was always cropped by 15 pixels on each edge to exclude these noisy components before searching was initiated.

The first problem arose with using the black-outlined white card. The transition from white to black and from black to the background often caused 8 edges to appear on flat colored backgrounds. This was easy to solve using buffers to eliminate the double edges every time: the two pixels farthest from the center would identify the outer edges, the other two the inner target.

A second problem that repeatedly occurred during the edge detection process was the variability of the pixels identifying the edges. If the card was not perfectly parallel to the camera lens, the lines detected would look slightly slanted. (This is similar to looking down a flat road towards the horizon; it looks as if it converges to a point in the distance.) Given we had a mounting device that would keep the card at a reasonably steady angle to the camera, we assumed, after extensive testing, that the difference in pixel locations for any left or right edge should not exceed 50 pixels. This led to the method implemented above in computeSpread.m, where the algorithm checks to see if bins are separated by 50 or more pixels. If the separation is less than 50 pixels, an average of the two bins is used to get a better estimate. Ideally, a well designed card-holding device would eliminate this problem, keeping the card rigid and standing perfectly perpendicular to the bore sight of the camera.

Alternate Algorithm Work

During the process of finding the edge locations, one could envision multiple solutions to identifying the thresholded pixels. Our initial implementation was similar to our final outside-to-inside search procedure, but searched for only the 4 maximum pixel locations in each row and column. It was our belief that the highest values would mark the edges every time. However, another problem arose once multicolored or patterned backgrounds/targets were being photographed. If there were stripes or changes in the background, say a table edge, those transitions would appear as high-valued edges in the Canny algorithm.

In that case, depending on the photograph, we might be identifying a stripe in the background, or just a random noisy set of pixels. Similarly, if the target had very bright lines, we might detect high pixels inside the target hole. Even though our card was designed to show maximum gradient via the black outline, we could not be sure that the four maximum pixel intensities were always those of the white card. The issue then became one of knowing how many high-valued pixels to store. If we limit it to four, corresponding to the edges of the card and target, we may miss them due to outside noise or the background pattern; in the worst case, if we stored ALL high-valued pixels, it required a long and iterative process that was very inefficient. By computing the max pixel value locations for each outer quarter of the image, we could be sure we were NOT getting max values inside the target hole, but this proved to be as tedious a method as storing all the high-valued pixels. See the idOuterEdgeLocationMax.m matlab file. This led us to use the outside-in search procedure.

Initially we implemented this outside-in search procedure without the holder. Many of the problems we encountered were similar to those expressed above in searching for the maximum values in each row/column; there were too many thresholded values at times. Even if we buffered, our bins for the left and right edges would be of equal size if there were stripes on the outside margins that might pass threshold in the Canny algorithm. Still, the most probable edge remained in the maximum bin closest to the center of the image. See the idOuterEdgeLocationOutsideInOriginal.m matlab file. This led us to the idea of a card holder to simplify the edge detection problem.

The outside-in search procedure works well because of the use of the white card holder. The holder serves to vertically orient the card and maintain a regular distance to the card. This ensures the card is the dominant image in the photograph; the background, vertically and on the sides, is limited to a size not much larger than the height of the card. In many of our test photos, one can notice the bottom and top edges are removed from the photo due to the nearness of the camera, because the top and bottom were cropped off. This greatly aids us in the outside-to-inside search method we implemented and gave us more confidence in our estimates of the outer edge. This leads us to the next point, base lining the aim of the camera on the device.

Base Lining the Card-Holding Device

A brief discussion of how the device would be guaranteed to center the white card and camera is of importance here. It is safe to say that many could question how well a blind individual could center their camera lens and the white card. Secondly, how could one center the camera such that the base (and top) of the white card is cropped? This could be done by giving the user a centering algorithm with audio feedback and an adjustable mount on the card-holding device. The algorithm would base it off a white card with its center still intact but outlined in black, and a white background sheet, which could be included in the package when purchased by the user. The user would attach this card, take photos on the white background, and the algorithm could compute the edges, so the algorithm could supply feedback to the user to accurately adjust the mount until the edges matched as we described above. Then the target card with the hole could be inserted in place of the test card and function just as well. This may appear difficult for a blind person, but using algorithms similar to the ones written already, we can accomplish that, I believe.

Color Detection

Once the color target is acquired by the edge detection algorithm, the next step is color detection. As stated earlier, the input to the color detection is the 3D matrix which contains the RGB values of the color target taken from the original image. Under perfect lighting conditions the image would contain the exact RGB values for the target, and the color detection would be trivial. However, the ambient lighting, the reflectivity of the object being photographed, and its motion all affect the image, so the color in the image may not be the actual color a person would see under ideal conditions. To correct for this, we needed to take these RGB values, run a normalization algorithm, and then decide on the color in the target. Hence, color detection consists of two basic steps, and we divided the problem into these two distinct parts: normalization (or color correction) and deciding the color.

Normalization

The image taken by a camera is a rendering of the light that impinges on the lens. For example, a photo of a red shirt under fluorescent lighting might appear slightly pink, whereas the same shirt under tungsten will appear more orange. The process of normalizing or white balancing is used to correct for such effects. Most cameras have some process already inbuilt to do just this, but cell-phone cameras are low end, and the minimal processing that is done is not enough; in our case the image is taken from a low quality camera and the lighting is undetermined. There are many algorithms that can be followed to do white balancing. Using class notes and research done online, we found various methods to achieve the desired result. However, since the program must run on a cell phone with limited resources, we decided to keep the algorithms as simple as possible, so that no matrix inversion or complex operations were required.

"Gray World" Assumption

The first strategy investigated was the "gray world" assumption whereby, as the name suggests, the world is considered to be gray on average. So the average R, G and B values for the image are normalized to 128, and this value is used to normalize all the pixels. This is a well known technique which returned usable results with the images that were tested, under most lighting conditions. The white balancing was correct even in low light conditions, when the brightest point had RGB values near gray. This algorithm could be used in darker conditions and returned better results, though, as discussed in the color recognition section, if illumination is below a minimum level the color recognition is unreliable. However, as illustrated in figure 12 below, under bright conditions and with uniform colors only, the gray world algorithm fails.
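A minimal sketch of the gray-world step as described (our own illustration; the input name is hypothetical, and clipping at white is our assumption):

    patch  = im2double(imread('target_patch.jpg'));  % hypothetical cropped target
    avgRGB = squeeze(mean(mean(patch, 1), 2));       % average R, G and B values
    scale  = (128/255) ./ avgRGB;                    % map each channel mean to 128
    balanced = patch;
    for ch = 1:3
        balanced(:,:,ch) = min(patch(:,:,ch) * scale(ch), 1);  % clip at white
    end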

However. But we have the advantage of having a white card in the image. after extensive testing with different lighting conditions and images. . 255. under poor lighting conditions. on a bright white surface under fluorescent lighting. on a bright white surface under fluorescent lighting. since the card is known to be pure white. which can be used as the “white point”. as illustrated in the figures below. The results achieved were comparable to those using the gray world assumption. the results were better. this algorithm was chosen.255). it was found that the overall performance was better than “gray world” and hence. and in most cases with bright lighting. To achieve this. But. Normalizing using White Point The gray world assumption holds valid for most lighting and is a useful way of white balancing if no other information is available from the picture. this algorithm performs marginally worse. we find the max RGB point in the card image and normalize that to (255. 10 10 20 20 30 30 40 40 50 50 60 60 70 70 80 80 90 90 100 100 20 40 60 80 100 120 140 160 180 200 220 20 40 60 80 100 120 140 160 180 200 220 Figure 12: Illustration of normalization using “Gray World”. 10 10 20 20 30 30 40 40 50 50 60 60 70 70 80 80 90 90 100 100 20 40 60 80 100 120 140 160 180 200 220 20 40 60 80 100 120 140 160 180 200 220 Figure 13: Illustration of normalization using white point. This method brings up the RGBs for the target image. a reference from which we can calculate the normalization coefficients for the RGB value.

Under very low lighting, with max RGB below 100, both normalization algorithms are rendered useless, since the target image essentially looks black and the white card appears dark gray; the normalization just creates artifact pixels and doesn't bring out the real color. To counter this problem, a more extensive and complicated white balance algorithm must be implemented, or a flash must be required on the device.

Color Recognition

After normalization of the pixels, the 3D matrix is then sent to the color recognition algorithm, where the color of the target image is decided. Initially, the color recognition part seemed trivial. The original plan was to bin each pixel depending on its RGB values, to allow nested if statements in the code. However, on initial testing we recognized that making bins with upper and lower bounds was complicated, since some colors have wider ranges than others. Also, there was no set pattern or grouping, since a slight change in any one of the three parameters can cause the colors to vary. Setting even upper and lower bounds is almost impossible. We spent over a week attempting to come up with methods to group the RGB values.

Initially, we tried just sorting the color lists by R, then G, and B. This method did bring similar colors together, but there were still some random colors scattered around. On sorting, we did realize though that R seems to be dominant: we found all the colors where R is the maximum have a reddish hue, and all with max B are bluish. Next, we attempted to find the max of R, G and B for each pixel and normalize that pixel by it; hence, if gold is (255, 215, 0), then the max of RGB for that pixel is 255 and the normalized pixel will be (1, 0.84, 0). But we again had the problem of overlap and unclear bounds.

At this point, after working through all the colors, we started to look beyond RGB and into more complicated color schemes that are translations of RGB into a different domain. The CIE-Lab color system seemed to be the answer; it was discussed quite extensively in class and could potentially provide a representation for the colors which could be easily divided and subdivided. Once the illumination is known, an image represented using CIELab becomes independent of the device taking the picture. However, to be able to use CIELab, the illumination must be known, since the transformation requires different constants which are in turn dependent on the ambient lighting in which the picture was taken.

While researching CIELab, we found the HSV color scheme, which stands for the Hue, Saturation, Value model. HSV is a simple transformation from RGB, which is also one of the pros of using this scheme. The Hue is the color type, like red or blue; Saturation is the vibrancy of the color (i.e., how faded or sharp a color is); and Value is the brightness of the color. Figure 14 shows the cylindrical representation of this standard.

Figure 14: Cylindrical representation of the HSV model

The Hue represents the color, changing as one moves around the circumference of the cone; the Hue value is calculated from 0-360, similar to angles. The Saturation represents the sharpness of the color, being zero at the center (completely faded) and 1 at the circumference for full vibrancy. The Value represents the brightness of the color and also goes from zero to 1, zero being dark and 1 being fully bright. The following formulas could be easily coded and are not computationally intensive:

Figure 15: Equations used to convert R, G and B values to HSV

Once the above calculation is run, the H value can be used to determine the color, and the S and V values the lightness or darkness. The HSV model suits the needs of this project perfectly, making grouping colors a trivial task.
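In MATLAB the same conversion is available as the built-in rgb2hsv, a convenient stand-in for hand-coding the Figure 15 equations (note that MATLAB scales H to [0,1] rather than 0-360):

    hsv = rgb2hsv(balanced);   % from the normalized patch above
    H = hsv(:,:,1) * 360;      % rescale hue to the 0-360 range used here
    S = hsv(:,:,2);            % saturation: 0 faded, 1 fully vibrant
    V = hsv(:,:,3);            % value: 0 dark, 1 fully bright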

Figure 16: Hue (H) values in HSV and their corresponding colors

After deciding on HSV, the code was written to incorporate the major VIBGYOR colors, with each color having a light, true, and dark version, plus white, black, and gray. This scheme provides a robust system, since upper and lower bounds can be set easily and lightness and darkness are also clearly defined. As future work, more colors could be added simply by making the bounds smaller and defining colors in higher precision. The figures further below show some test pictures and the results after running them through the matlab routine.
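The overall shape of the nested-if binning is sketched below. The hue boundaries and light/dark cutoffs are illustrative guesses on our part, not the project's actual 24-color table:

    h = mean(H(:)); s = mean(S(:)); v = mean(V(:));  % predominant color estimate
    if v < 0.15
        name = 'black';
    elseif s < 0.15                       % washed out: white or gray by brightness
        if v > 0.85, name = 'white'; else, name = 'gray'; end
    else
        if h < 20 || h >= 340, name = 'red';
        elseif h < 45,  name = 'orange';
        elseif h < 70,  name = 'yellow';
        elseif h < 160, name = 'green';
        elseif h < 260, name = 'blue';
        else,           name = 'violet';  % indigo/violet band, illustrative
        end
        if v < 0.4
            name = ['dark ' name];        % dark version of the color
        elseif v > 0.8 && s < 0.6
            name = ['light ' name];       % light version of the color
        end
    end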

Figure 17a: Pure white card under fluorescent lighting. Result: WHITE

Figure 17b: Light green and white striped shirt under sunlight. Result: Light Green

Figure 17c: Pure red backpack under tungsten lighting. Result: Crimson

Future Work

From picture framing to edge detection to color identification, our algorithm did what it was intended to do, but of course there is plenty of room for improvement. To continue with this work, the first area that needs exploring is implementing the algorithm on a camera phone. All of our processing was done on a laptop, where memory and RAM are not a problem; obviously the same kind of computing power cannot be expected from a phone. The second front that needs exploring is decreasing the processing time to audibly deliver the color to the user. Both of these fronts would require streamlining the algorithm to run in a clean and efficient manner for maximum practicality and usability.

Another area for possible improvement would be increasing the color library. Our algorithm was able to recognize and deliver only 24 different colors. Finally, the algorithm's sophistication could be increased in order to detect color patterns, like stripes and patch patterns, rather than only the predominant color. The last two fronts, expanding the color library and increasing the algorithm's sophistication, would increase the processing time and memory requirements, but with the camera phone continuing to increase in computing power and capability, we believe this could be achieved.

All people have a right to experience life to its fullest, which includes engaging the world through all five senses. If nature does not provide an individual with the full use of vision, we hope our project has taken a small step in rectifying that.

Appendix I

All of our Matlab scripts and source code are attached to our project website, which can be found at http://stanford.edu/~denes/Psych221/Psych_221Final_Project.htm

Appendix II

Group Project Roles & Responsibilities:

Elston Tochip: Development of the edge detection algorithm.
Robert Prakash: Development of the color identification algorithm.
Phillip Lachman: Development and selection of the picture images, color schemes and audio output files.