watershed technique doesn’t declare it as an independent segment. Fig. 2 illustrates this problem. Here we have an image with three distinct regions. On the left is a region whose shade decreases continuously. To its right is a second region, within which there is a third region whose shade varies continuously and randomly. The human visual system easily distinguishes the three regions in this image, but doing so has been a great challenge for computer vision algorithms.
Fig. 2 a) The original image showing three distinct regions, one segment having continuous shade and another having granular shade. b) The image after passing through the watershed transformation technique.
While many merging algorithms simply apply a fixed rule to group pixels and regions together, the authors of [ ] present a merging algorithm that uses relative dissimilarities between regions to determine which ones should be merged, which produces an algorithm that provably optimizes a global grouping metric. They start with a pixel-to-pixel dissimilarity measure, such as intensity differences between N8 neighbours. (Alternatively, they also use the joint feature-space distances introduced by Comaniciu and Meer [ ].) In the proposed algorithm, this approach of finding relative dissimilarities between regions is adopted and modified so that it relies not only on dissimilarities, but also on the relative and sufficient similarities of adjacent regions, to give the finalized version of the segmented image.

II.
In the watershed transform, the starting point of the algorithm is the topographic interpretation of the image: a 3-D graph with the x and y coordinates of the image placed on the horizontal plane and the corresponding intensity levels plotted in the vertical direction. In this interpretation, high intensity values of the image appear as peaks, low intensity values appear as valleys, and pixels with intermediate intensities can be viewed as slopes leading to the peaks. This is an efficient way of observing the image, since it also conveys an idea of the image’s gradients. The whole watershed technique is based on the immersion or flooding of this artificial landscape with water, followed by the detection of the edges and regions that survive this flooding. Here, the points can be divided into three main categories: a) points belonging to a regional minimum; b) catchment basins, the points at which a drop of water, if placed at any of those locations, would fall to a single minimum; c) watershed lines, the points at which water would be equally likely to fall to more than one such minimum. Once these points are found, the region decisions are made using computations based on neighbourhood connectivity and sorting. The mathematical details of this algorithm are provided in [ ] and [ ].

III.
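The immersion idea above can be illustrated with a minimal marker-based flooding sketch. This is not the paper's implementation: the function name, the 4-neighbour connectivity, and the priority-queue flooding order are illustrative assumptions.

```python
import heapq
import numpy as np

def watershed_flood(image, markers):
    """Marker-based watershed by simulated immersion.

    image   : 2-D array of intensities (the topographic surface).
    markers : 2-D int array, >0 at regional-minimum seeds, 0 elsewhere.
    Returns a label image: each pixel carries the label of the flood
    (catchment basin) that reached it first, lower ground flooding first.
    """
    labels = markers.copy()
    rows, cols = image.shape
    heap = []
    # Seed the flood: marker pixels enter a priority queue ordered by
    # intensity, so the lowest points of the landscape are flooded first.
    for r, c in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (image[r, c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        lab = labels[r, c]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if labels[nr, nc] == 0:  # unflooded pixel: claim it
                labels[nr, nc] = lab
                heapq.heappush(heap, (image[nr, nc], nr, nc))
    return labels
```

After flooding, the watershed lines correspond to pixels whose neighbours carry a different label, i.e. the places where two floods meet on a ridge.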
In this part, the proposed algorithm is presented in detail. In the first step, the image I(x,y) is taken as input to the algorithm. The image is first passed through Laplacian and averaging filters. The Laplacian filter is a very efficient technique for edge detection, as it highlights and segments parts of the image that have a high gradient. The averaging (mean) filter is applied to reduce the noisy part of the image and to smooth out the parts of the image that have highly granular shade. The image obtained after passing the original image through the Laplacian and averaging filters is then given to the watershed transform. The watershed-transformed image is in a form that shows the proposed edges at low gray-scale levels. It is then thresholded in order to give clear edge points, i.e. the edge points that depict the boundaries of objects and have a gray-scale level of zero.
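The first stage of the pipeline (Laplacian edge emphasis, mean smoothing, thresholding) can be sketched as follows, using SciPy's standard filters. The watershed step between smoothing and thresholding is omitted here, and the filter size and threshold value are illustrative choices; the paper does not fix specific parameters.

```python
import numpy as np
from scipy import ndimage

def preprocess(image, threshold=10.0):
    """First stage: Laplacian filter, averaging filter, then thresholding.

    Returns a binary edge map (1 = candidate edge point, 0 = background).
    The watershed transform that the paper inserts before thresholding is
    omitted in this sketch.
    """
    # Laplacian highlights high-gradient (edge) parts of the image.
    edges = ndimage.laplace(image.astype(float))
    # 3x3 averaging (mean) filter suppresses noise and granular shade.
    smoothed = ndimage.uniform_filter(edges, size=3)
    # Threshold to keep only clear edge points.
    return (np.abs(smoothed) > threshold).astype(np.uint8)
```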
The resultant image from the previous transformations is then processed through the proposed algorithm. In this algorithm, the edge points are extracted and stored in an n × 2 array, where n, the number of rows, is the number of edge points, and the two columns store the x and y coordinates of the edge points. This array is used for the analysis of the original input image I(x,y) again. From the original image, the gray-scale levels of the pixel points addressed by the array are taken. Finally, the whole image is scanned again at each location stored in the array, and a vicinity check is performed. This vicinity check inspects whether the pixel points lying adjacent to the edge points declared by the first part of the algorithm have sufficient intensity differences with their neighbouring pixels or not. For each edge point, if the difference is found to be less than a threshold value, its coordinates are modified to new coordinate values; otherwise, the edge point is retained. This process is repeated for every single edge point.

IV. FLOWCHART OF THE ALGORITHM
Fig. 3 depicts the steps involved in the proposed algorithm.
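The second stage described in Section III (the n × 2 edge-point array and the vicinity check) can be sketched as follows. The paper leaves the coordinate-adjustment rule unspecified, so the rule used here (snapping a weak edge point to its highest-contrast neighbour) and the threshold value are assumptions.

```python
import numpy as np

def refine_edges(edge_mask, original, diff_threshold=20.0):
    """Vicinity check on candidate edge points from the first stage.

    edge_mask : binary image from the watershed/threshold stage (1 = edge).
    original  : the original gray-scale image I(x, y).
    Edge points with insufficient contrast against their neighbourhood are
    moved (here: to the highest-contrast neighbour -- an assumed rule);
    points with sufficient contrast are retained.
    """
    # n x 2 array: each row holds the coordinates of one edge point.
    points = np.argwhere(edge_mask)
    rows, cols = original.shape
    refined = []
    for r, c in points:
        # Scan the 8-neighbourhood for the largest intensity difference.
        best, best_rc = -1.0, (r, c)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                d = abs(float(original[nr, nc]) - float(original[r, c]))
                if d > best:
                    best, best_rc = d, (nr, nc)
        if best < diff_threshold:
            refined.append(best_rc)   # insufficient contrast: move the point
        else:
            refined.append((r, c))    # sufficient contrast: keep the point
    return np.array(refined)
```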