
Biological vision Report

Image Edge Detection and Orientation Assignment


Shahzen Khan (B20CS065)

Part 1: Image Preprocessing

1. Image Selection and Diversity:

I created a Google Drive folder and uploaded a diverse set of at least 10 natural images.

These images were carefully chosen to encompass a wide range of complexities, perspectives, and lighting conditions. This diversity helps in testing the edge detection algorithm under various scenarios, ensuring its robustness in real-world applications.

2. Image Loading and Display:

I used appropriate libraries to load the images into the Python environment.

Displaying the images allowed me to visually inspect them and gain an understanding of their content and characteristics.
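The exact loading code is not shown in the report; a minimal sketch using OpenCV and matplotlib, assuming the images were copied from the Drive folder into a local directory named images/, could look like this:

```python
# Minimal sketch of the loading and display step; the images/ folder
# and .jpg extension are assumptions, not paths from the report.
import glob
import cv2
import matplotlib.pyplot as plt

image_paths = sorted(glob.glob("images/*.jpg"))
images = [cv2.imread(p) for p in image_paths]

for path, img in zip(image_paths, images):
    # OpenCV loads images as BGR; convert to RGB for correct display.
    plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    plt.title(path)
    plt.axis("off")
    plt.show()
```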

The image below is one of the images from the dataset; the lighting conditions are good.



The image below was taken during Diwali on the IITJ campus; the lighting conditions are not so good.

3. Conversion to Grayscale:

Grayscale conversion is a crucial step in edge detection as it simplifies the image while preserving important features.
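With OpenCV this step is a single call per image; a sketch, assuming the images list from the loading step above:

```python
import cv2

# Convert each loaded BGR image to a single-channel grayscale image.
gray_images = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in images]
```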



To achieve optimal contrast enhancement and dynamic range preservation, I employed two techniques: Contrast Limited Adaptive Histogram Equalization (CLAHE) and gamma correction.

4. Adaptive Histogram Equalization (AHE):

AHE is used to enhance the dynamic range of pixel intensities while preserving
local details.

5. Contrast Limited Adaptive Histogram Equalization (CLAHE):

CLAHE is a variation of AHE that limits the amplification of extreme pixel values, preventing over-amplification and potential noise.

I applied CLAHE to the L channel of the LAB color space, which helps in
preserving the local contrast of the image.
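A sketch of this step is shown below; the clip limit and tile grid size are illustrative defaults, not values taken from the report.

```python
import cv2

def clahe_on_l_channel(bgr, clip_limit=2.0, tile_grid_size=(8, 8)):
    # Move to LAB so that only lightness is equalized, leaving color intact.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```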

6. Gamma Correction:

Gamma correction is a non-linear operation that adjusts pixel intensities to improve contrast.

I applied gamma correction with a gamma value of 1.5 to further enhance the
contrast in the image.
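A minimal sketch of the power-law transform, assuming the common output = input^(1/gamma) convention on 8-bit images:

```python
import numpy as np

def gamma_correct(img, gamma=1.5):
    # Normalize to [0, 1], apply the power-law transform, rescale to [0, 255].
    normalized = img.astype(np.float32) / 255.0
    corrected = np.power(normalized, 1.0 / gamma)
    return (corrected * 255).astype(np.uint8)
```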



Part 2: Complex Orientation Filters (Gabor-Based Filters)

1. Image Selection:

I chose one high-resolution grayscale image from the set I prepared in Part 1
for further processing. This image serves as the basis for applying the
orientation filters.

2. Gabor-Based Filters:



Gabor filters are advanced orientation filters that are constructed using Gabor
wavelets. These wavelets are complex sinusoidal functions modulated by
Gaussian distributions. By adjusting various parameters, I can extract features
at different orientations and frequencies.

3. Gaussian Smoothing:

Before applying the Gabor filters, I performed Gaussian smoothing on the selected grayscale image. This process helps reduce noise and prepares the image for better feature extraction.
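A sketch of this step, assuming gray is the selected grayscale image; the 5x5 kernel and sigma = 1.0 are illustrative choices rather than values from the report:

```python
import cv2

# Suppress high-frequency noise before Gabor filtering.
smoothed = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)
```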

4. Generating Gabor Filters:

I created multiple Gabor filters with varying orientations (0, 45, 90, and 135
degrees) and frequencies (0.6, 1.0, and 1.5). These filters are designed to
capture different types of features in the image.
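One way to build such a filter bank is with skimage's gabor_kernel, which returns exactly this kind of complex, Gaussian-modulated sinusoid; a sketch:

```python
import numpy as np
from skimage.filters import gabor_kernel

orientations = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
frequencies = [0.6, 1.0, 1.5]

# Each kernel is a complex Gabor wavelet: a sinusoid modulated by a Gaussian.
kernels = [
    (theta, freq, gabor_kernel(frequency=freq, theta=theta))
    for theta in orientations
    for freq in frequencies
]
```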

5. Convolution Operation:

I applied each Gabor filter to the smoothed image using a convolution operation. This process involves sliding the filter over the entire image to extract features related to specific orientations and frequencies.
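A sketch of the convolution step, continuing from the filter bank above and using scipy; the real and imaginary responses are combined into a magnitude:

```python
import numpy as np
from scipy import ndimage

responses = {}
for theta, freq, kernel in kernels:
    # Convolve with the real and imaginary parts of the complex kernel.
    real = ndimage.convolve(smoothed.astype(np.float32), np.real(kernel))
    imag = ndimage.convolve(smoothed.astype(np.float32), np.imag(kernel))
    # Combine the quadrature pair into a single magnitude response.
    responses[(theta, freq)] = np.hypot(real, imag)
```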

6. Visualizing Filtered Images:

I visualized the resulting filtered images for each orientation and frequency
combination. This step is crucial for observing how the filters emphasize specific
features in the image.
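A sketch of one possible grid layout for these visualizations, continuing from the responses dictionary above:

```python
import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(len(orientations), len(frequencies), figsize=(10, 12))
for i, theta in enumerate(orientations):
    for j, freq in enumerate(frequencies):
        axes[i, j].imshow(responses[(theta, freq)], cmap="gray")
        axes[i, j].set_title(f"theta={np.degrees(theta):.0f}, f={freq}")
        axes[i, j].axis("off")
plt.tight_layout()
plt.show()
```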



7. Interpreting Results:

The displayed images showcase the features that were highlighted by each
Gabor filter. By examining these filtered images, I gain insights into how
different orientations and frequencies contribute to the representation of edges
and textures in the image.

Part 3: Winner-Takes-All and Normalization

1. Winner-Takes-All (WTA) Algorithm:

I implemented the WTA algorithm to consider both magnitude and orientation information from the complex filtered images.

This approach allows for a more robust edge extraction process by selecting the maximum magnitude and its corresponding orientation at each pixel location.

2. Max Magnitude and Orientation Calculation:

For every pixel in the filtered images, I iterated through all orientations and
selected the one with the maximum magnitude.



This process resulted in two output images: one containing the maximum
magnitudes and the other representing the corresponding orientations.
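A minimal sketch of this computation, assuming the responses for all orientations at one frequency (1.0 here, chosen arbitrarily for illustration) are stacked into a single array:

```python
import numpy as np

# Stack magnitude responses into shape (n_orientations, H, W).
stack = np.stack([responses[(theta, 1.0)] for theta in orientations])

# Winner-takes-all: keep the maximum magnitude at each pixel, plus the
# orientation of the filter that produced it.
max_magnitude = stack.max(axis=0)
winner_orientation = np.array(orientations)[stack.argmax(axis=0)]
```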

3. Visualization of WTA Output:

To visualize the WTA output, I combined the orientation information with the
normalized magnitude values.

The orientation information was mapped to colors using the 'hsv' color map,
and the magnitude values were normalized to enhance visualization.
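One way to realize this mapping is to build an HSV image directly, with hue encoding orientation and value encoding normalized magnitude; a sketch, continuing from the WTA arrays above:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Orientations span [0, pi), so dividing by pi maps them onto the hue wheel.
hue = winner_orientation / np.pi
value = max_magnitude / (max_magnitude.max() + 1e-8)  # normalize to [0, 1]
hsv = np.stack([hue, np.ones_like(hue), value], axis=-1)

plt.imshow(hsv_to_rgb(hsv))
plt.axis("off")
plt.show()
```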

4. Purpose of Visualization:

This visualization provides a clear representation of the features and edges extracted using the WTA algorithm.

It allows for a visual inspection of the orientation and magnitude information, aiding in the assessment of the effectiveness of the WTA process.



5. Normalization Techniques:

I employed advanced normalization methods like Adaptive Contrast Normalization (ACN) to further enhance edges while suppressing noise.

ACN is particularly effective in ensuring that features are more pronounced and discernible, which is crucial for accurate edge detection.

6. Application of ACN:

I applied ACN using the exposure.equalize_adapthist() function from the skimage library. This method adaptively enhances the contrast in the image, emphasizing edges and features.
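A sketch of this call; the clip_limit value is illustrative, and the magnitude image is first rescaled to [0, 1] as equalize_adapthist expects:

```python
from skimage import exposure

normalized_magnitude = exposure.equalize_adapthist(
    max_magnitude / (max_magnitude.max() + 1e-8),
    clip_limit=0.03,  # illustrative value, not taken from the report
)
```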

7. Display of ACN Output:

The resulting normalized magnitude image, obtained through ACN, was displayed to showcase the enhanced features.

Part 4: Comparative Analysis


I conducted a comprehensive comparative analysis of edge detection, encompassing the various stages of the image processing pipeline. Here is an in-depth explanation of what I did and why:

1. Pipeline Application:



I executed the entire pipeline, which includes image preprocessing,
complex orientation filtering, Winner-Takes-All (WTA), and normalization, on
all loaded images.

This step ensures that the complete process is applied uniformly to each
image, facilitating a fair comparison.

2. Impact of Complexity, Texture, and Lighting:

The impact of complexity, texture, and lighting on edge detection was notable in my analysis. In images with high complexity, intricate details were effectively captured by the algorithm, demonstrating its robustness. Images with varied textures showcased the algorithm's ability to discern edges amidst intricate patterns. Under varying lighting conditions, the edge detection algorithm consistently maintained its performance, highlighting its adaptability in real-world scenarios. Overall, the algorithm's ability to handle diverse image characteristics underscores its reliability for a wide range of applications.

3. Introduction of Noise:

I introduced Gaussian noise to the original images.
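A minimal sketch of the noise injection, with an illustrative standard deviation on the 0-255 intensity scale:

```python
import numpy as np

def add_gaussian_noise(img, mean=0.0, sigma=25.0):
    # sigma is an illustrative noise level, not the value used in the report.
    noise = np.random.normal(mean, sigma, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```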



EDGE DETECTION ON NOISY IMAGE:

4. Comparison with Original Edge Detection:

The comparison between the original edge detection and noise-influenced edge detection revealed noteworthy insights. The Structural Similarity Index (SSIM) indicated a substantial deviation of 0.1671, suggesting a notable impact on structural similarity, encompassing luminance, contrast, and structure. Furthermore, the Edge F1-score, which assesses precision and recall, demonstrated a perfect score of 1.0, indicating a high degree of accuracy in detecting edges despite the introduction of noise. These metrics collectively showcase the algorithm's resilience in maintaining accurate edge detection performance even in the presence of noise.
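A sketch of how these two metrics can be computed with skimage and scikit-learn; edges_clean and edges_noisy are hypothetical names for the two edge maps, assumed to be float images in [0, 1], and the 0.5 binarization threshold is an assumption:

```python
from skimage.metrics import structural_similarity
from sklearn.metrics import f1_score

# SSIM compares luminance, contrast, and structure between the two maps.
ssim_value = structural_similarity(edges_clean, edges_noisy, data_range=1.0)

# Edge F1-score: binarize both maps, then compare pixel-wise.
f1 = f1_score(
    (edges_clean > 0.5).ravel().astype(int),
    (edges_noisy > 0.5).ravel().astype(int),
)
```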

5. Purpose of Evaluation:

The comparison between the original edge detection and noise-influenced edge detection underlines the algorithm's resilience in the presence of noise. It also offers valuable insights into the balance between edge sensitivity and noise suppression.

6. Pros and Cons of Gabor-Based Edge Detection:

Pros:

Multi-Orientation Detection: Gabor filters excel at capturing edges at varying orientations, making them highly versatile in detecting features with different orientations in an image.

Frequency Sensitivity: They are effective in identifying edges at different frequencies, allowing for the detection of fine details as well as broader features.

Cons:

Parameter Sensitivity: Gabor filters require careful tuning of parameters such as frequency, orientation, and aspect ratio. Incorrect settings may lead to suboptimal edge detection.

Computational Complexity: Implementing Gabor filters can be computationally intensive, particularly when dealing with high-resolution images or a large number of orientations and frequencies.

Potential Overfitting: In some cases, Gabor filters may overfit to specific features in the training data, potentially leading to reduced generalization performance on unseen images.

Part 5: Visualization

1. Overlaying Detected Edges on Original Images:

I implemented a method to overlay the edge-detected results on the original images with varying transparency levels.



This technique is crucial in visually highlighting the detected features,
allowing for a direct comparison with the original content.
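A sketch of one way to do this blending with OpenCV; rendering the edges in red and the alpha weight are illustrative choices, not details from the report:

```python
import cv2
import numpy as np

def overlay_edges(original_bgr, edge_map, alpha=0.6):
    # edge_map is assumed to be a float image in [0, 1].
    edges_u8 = (np.clip(edge_map, 0, 1) * 255).astype(np.uint8)
    edge_color = np.zeros_like(original_bgr)
    edge_color[:, :, 2] = edges_u8  # red channel in BGR order
    # Blend the colored edges onto the original with transparency alpha.
    return cv2.addWeighted(original_bgr, 1.0, edge_color, alpha, 0)
```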

2. Purpose of Overlaying:

The purpose of overlaying is to provide a clear visual representation of the detected edges in relation to the underlying image content.

This visualization aids in understanding how well the algorithm captures edges and features within the context of the original scene.

Below is one of the overlaid images from the dataset; outputs for the other images can be found in my Google Colab notebook.

3. Computing Gradient Magnitude and Orientation Maps:

I computed gradient magnitude and orientation maps to further visualize the strength and direction of edges.

The gradient magnitude map highlights areas of significant intensity change, while the orientation map indicates the direction of the gradient.
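One common way to compute these maps is with Sobel derivatives; a sketch, assuming gray is the grayscale image:

```python
import cv2
import numpy as np

# Horizontal and vertical derivatives via 3x3 Sobel kernels.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

magnitude = np.hypot(gx, gy)      # strength of the intensity change
orientation = np.arctan2(gy, gx)  # direction of the gradient, in radians
```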



4. Importance of Gradient Maps:

These maps offer additional insights into the characteristics of edges present in the image.

The magnitude map allows for a focused examination of areas with prominent edge features, while the orientation map provides information about the orientation of these edges.

5. Displaying Gradient Maps:

I designed a method to display the gradient magnitude and orientation maps for each processed image.

This step enables a detailed examination of the underlying features and the
directionality of edges.
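A sketch of a side-by-side display, continuing from the magnitude and orientation arrays above:

```python
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
ax1.imshow(magnitude, cmap="gray")
ax1.set_title("Gradient magnitude")
ax2.imshow(orientation, cmap="hsv")  # cyclic colormap suits angular data
ax2.set_title("Gradient orientation")
for ax in (ax1, ax2):
    ax.axis("off")
plt.show()
```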

Below is one of the gradient magnitude and orientation maps from the dataset; outputs for the other images can be found in my Google Colab notebook.

