
Unit – 1

Two Marks:

1. Define Image: An image is a two-dimensional representation of a visual scene or object captured through various imaging devices. It is composed of a grid of small elements called pixels, each representing a specific color or intensity at a particular position in the image.

2. Define Pixel or Picture Elements: A pixel (short for "picture element") is the
smallest unit of a digital image. It is a single point in an image's grid, and its color or
intensity value represents the visual information at that specific location.

3. Steps in Digital Image Processing: The steps involved in Digital Image Processing (DIP) typically include:

1. Image Acquisition
2. Preprocessing
3. Enhancement
4. Restoration
5. Compression
6. Transformation
7. Segmentation
8. Representation & Description
9. Recognition & Interpretation

4. Elements of DIP: Digital Image Processing involves various elements including image acquisition devices (such as cameras), algorithms for processing, display devices, and human perception.

5. Brightness Adaptation and Discrimination: Brightness adaptation refers to the ability of the human visual system to adjust to varying levels of illumination. Discrimination refers to the ability to perceive differences in intensity or color. These characteristics influence how we perceive images under different lighting conditions.

6. Difference between Rods and Cones: Rods and cones are photoreceptor cells in
the human retina. Rods are more sensitive to low light levels and are responsible for
black-and-white vision. Cones are responsible for color vision and work best under
higher light conditions.

7. Weber Ratio: The Weber Ratio is a measure of the just-noticeable difference in intensity of two adjacent regions in an image. It indicates the smallest change in intensity that is perceptible to the human eye.

8. Mach Band Effect: The Mach Band Effect is a visual phenomenon where our eyes
exaggerate the contrast at the edges of adjacent regions with different intensities,
leading to the appearance of bands of light and dark.

9. Simultaneous Contrast: Simultaneous contrast is an optical illusion where the perception of a color is affected by the colors surrounding it. The contrast between the colors can cause one color to appear more intense or different than it actually is.

10. Hardware-Oriented Color Models: Hardware-oriented color models include RGB (Red, Green, Blue) and CMYK (Cyan, Magenta, Yellow, Black), used in various imaging and display devices.

11. Sampling and Quantization: Sampling involves capturing discrete samples of continuous image data. Quantization refers to mapping these samples to discrete intensity levels to create a digital representation of the image.

12. Zooming and Shrinking: Zooming is the process of enlarging an image, while
shrinking is the process of reducing its size. Various algorithms are used for these
operations, such as interpolation for zooming and downsampling for shrinking.

13. Moiré Pattern: A moiré pattern is an unwanted pattern that appears when two
regular grids or patterns overlap or interact, causing interference and producing an
irregular visual pattern.

14. False Contouring: False contouring is an artifact of coarse quantization: when an image is represented with too few gray levels, smooth intensity gradients break up into visible, artificial boundaries (contours) that were not present in the original image.

15. Dither: Dithering is a technique used to reduce quantization artifacts by introducing a controlled level of noise. It helps to maintain the perception of smooth gradients in images.

16. Checkerboard Effect: The checkerboard effect is an artifact of insufficient spatial resolution: when an image is displayed or zoomed by simple pixel replication, the individual pixel blocks become visible, producing a distorted pattern resembling a checkerboard.

Big Question:

1. Steps in DIP: The steps in Digital Image Processing (DIP) are a series of processes
used to manipulate and analyze digital images. The typical steps include:

1. Image Acquisition: Capturing the image using sensors or cameras.
2. Preprocessing: Cleaning and enhancing the image, correcting imperfections.
3. Enhancement: Improving the visual quality of the image for better
perception.
4. Restoration: Removing noise and distortions introduced during acquisition or
transmission.
5. Compression: Reducing the size of the image data for storage or
transmission.
6. Transformation: Applying mathematical operations (e.g., Fourier transform)
to change the image representation.
7. Segmentation: Dividing the image into meaningful regions or objects.
8. Representation & Description: Creating descriptors or features to represent
image content.
9. Recognition & Interpretation: Identifying and understanding objects or
patterns within the image.

2. Elements or Components of DIP: Digital Image Processing involves several components, including:

• Image Input: Capturing images from various sources (cameras, scanners, etc.).
• Image Output: Displaying processed images on screens or other media.
• Image Processing Algorithms: A set of mathematical and computational
methods for manipulating images.
• Image Storage: Storing images in digital formats for easy access.
• Hardware and Software: Computers, processors, and software tools for
image processing.
• Image Analysis: Extracting information and patterns from images for further
interpretation.

3. Elements of Visual Perception: Visual perception involves several elements that influence how humans perceive images:

• Brightness and Contrast: Detecting differences in light intensity to perceive shapes and details.
• Color Perception: The way our eyes interpret different wavelengths of light.
• Spatial Resolution: The ability to distinguish fine details in an image.
• Temporal Resolution: The ability to perceive changes over time.
• Visual Sensitivity: Different sensitivities to different colors and intensities.
• Visual Adaptation: The ability to adjust to varying lighting conditions.
• Visual Illusions: Phenomena that showcase how our brain interprets visual
information.

4. Digital Camera & Vidicon Camera:

• Digital Camera: Captures images using an electronic sensor array (such as
CCD or CMOS), converts light into electrical signals, and digitizes these signals
to create a digital image.
• Vidicon Camera: An older type of camera that used a vidicon tube to convert
light into an analog signal, which was then further processed for display or
recording.

5. Image Sampling and Quantization (Analog to Digital Conversion):

• Image Sampling: Capturing discrete samples of continuous analog image data at specific intervals (pixels). This forms a grid of pixel values.
• Image Quantization: Assigning a limited number of intensity levels to each
sample, converting the continuous range of intensities into discrete digital
values.
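
To make sampling and quantization concrete, here is a minimal Python/numpy sketch of uniform quantization, assuming the sampled image is already an 8-bit grayscale array (the function name and parameters are illustrative, not a standard API):

```python
import numpy as np

def quantize(image: np.ndarray, levels: int) -> np.ndarray:
    """Uniformly quantize an 8-bit grayscale image to `levels` gray levels."""
    step = 256 / levels                      # width of each quantization bin
    indices = np.floor(image / step)         # bin index for every pixel
    # Map each bin back to its representative (mid-bin) intensity.
    return np.clip(indices * step + step / 2, 0, 255).astype(np.uint8)

# Example: reduce a random 8-bit "image" to 4 gray levels.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(np.unique(quantize(img, 4)))           # at most 4 distinct values
```

With very few levels the quantization error becomes visible as the false contouring described above.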

6. Image Sensing and Acquisition Devices:

• CCD (Charge-Coupled Device): A sensor technology used in digital cameras to convert light into electrical charge, which is then digitized.
• CMOS (Complementary Metal-Oxide-Semiconductor): Another sensor
technology similar to CCD, but with different fabrication techniques. Used in
modern digital cameras.
• Scanners: Devices that capture physical images or documents and convert
them into digital form.
• Vidicon Tubes: Older technology used in television cameras, converting light
into electrical signals.

Unit – 2
Two Marks:

1. Define Image Enhancement: Image enhancement refers to the process of improving the visual quality of an image to make it more suitable for human perception or for further processing. Enhancement techniques aim to highlight certain features, increase contrast, remove noise, or emphasize specific information within the image.

2. Gray Level Transformation: Gray level transformation involves mapping the intensity values of an image to new values, which can be used to manipulate the overall appearance of the image. This transformation can include operations like contrast stretching, histogram equalization, and thresholding.

3. Contrast Stretching & Gray Level Slicing:

• Contrast Stretching: This is a technique to expand the range of intensity
values in an image. It enhances the contrast by mapping the original intensity
range to a larger range.
• Gray Level Slicing: This technique involves highlighting specific intensity
ranges in an image by assigning one value to the pixels within the selected
range and another value to the pixels outside that range.
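
A short numpy sketch of both operations, assuming an 8-bit grayscale array (the helper names are our own):

```python
import numpy as np

def contrast_stretch(image: np.ndarray, low: int = 0, high: int = 255) -> np.ndarray:
    """Linearly map the image's [min, max] intensity range onto [low, high]."""
    img = image.astype(np.float64)
    i_min, i_max = img.min(), img.max()
    if i_max == i_min:                       # flat image: nothing to stretch
        return image.copy()
    out = (img - i_min) / (i_max - i_min) * (high - low) + low
    return out.astype(np.uint8)

def gray_level_slice(image: np.ndarray, lo: int, hi: int,
                     highlight: int = 255, background: int = 0) -> np.ndarray:
    """Highlight intensities in [lo, hi]; map everything else to background."""
    return np.where((image >= lo) & (image <= hi), highlight, background).astype(np.uint8)
```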

4. Thresholding: Thresholding is a technique used to create a binary image from a grayscale image by selecting a threshold value. Pixels with intensities above the threshold are set to one value (often white), while pixels below the threshold are set to another value (often black).
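
In numpy the operation is a one-liner; choosing the threshold value (manually, or automatically with a method such as Otsu's) is the real work:

```python
import numpy as np

def threshold(image: np.ndarray, t: int) -> np.ndarray:
    """Binarize: pixels above t become 255 (white), the rest 0 (black)."""
    return np.where(image > t, 255, 0).astype(np.uint8)
```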

5. Histogram: A histogram is a graphical representation of the frequency distribution of pixel intensities in an image. It shows the number of pixels at each intensity level, helping to analyze the image's contrast, brightness, and overall distribution.

6. Histogram Equalization & Specification:

• Histogram Equalization: A technique that redistributes the pixel intensities in an image to achieve a more uniform histogram. This enhances the contrast and makes the image visually more balanced.
• Histogram Specification: A technique to transform the histogram of an
image to match a specified histogram. It's often used to adjust the image's
appearance to match a desired distribution.

7. Smoothing Filter: A smoothing filter (also known as a low-pass filter) is used to reduce noise and fine details in an image. It works by averaging or blurring the pixel values within a local neighborhood, resulting in a smoother appearance.

8. Sharpening Filter: A sharpening filter (also known as a high-pass filter) enhances the edges and fine details in an image. It does this by accentuating the differences in intensity between neighboring pixels, making edges appear more defined.

9. Spatial Filter: A spatial filter operates on an image by considering a small neighborhood of pixels at a time and applying a filtering operation based on the values of those pixels. Smoothing and sharpening filters are examples of spatial filters.

10. Derivative Filter Types: Derivative filters highlight edges and rapid changes in intensity within an image. Examples include the Roberts cross, Prewitt, and Sobel operators.

11. Roberts Cross, Prewitt & Sobel Operators: These are gradient-based edge detection operators used to detect edges in an image by calculating the gradient magnitude and direction. They differ in the way they calculate these values using neighboring pixel intensities.
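
As an illustration, here is a Sobel gradient-magnitude sketch using scipy's convolution; the Prewitt and Roberts cross operators follow the same pattern with different kernels:

```python
import numpy as np
from scipy import ndimage

# 3x3 Sobel kernels for the horizontal and vertical derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Gradient magnitude of the image via the Sobel operator."""
    img = image.astype(np.float64)
    gx = ndimage.convolve(img, SOBEL_X)      # horizontal derivative
    gy = ndimage.convolve(img, SOBEL_Y)      # vertical derivative
    return np.hypot(gx, gy)                  # sqrt(gx**2 + gy**2)
```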

12. Unsharp Masking: Unsharp masking is a sharpening technique that subtracts a blurred version of the original image from the original to obtain a mask of fine detail, then adds this mask back to the original. This enhances edges and fine details.
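
A sketch of unsharp masking with a Gaussian blur (scipy assumed available; sigma and amount are illustrative defaults). Note that amount > 1 turns this into the high boost filtering described next:

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(image: np.ndarray, sigma: float = 2.0, amount: float = 1.0) -> np.ndarray:
    """Sharpen: result = original + amount * (original - blurred)."""
    img = image.astype(np.float64)
    blurred = ndimage.gaussian_filter(img, sigma)
    mask = img - blurred                     # the high-frequency detail
    return np.clip(img + amount * mask, 0, 255).astype(np.uint8)
```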

13. High Boost Filtering: High boost filtering enhances the high-frequency
components of an image to improve its sharpness. It's achieved by combining the
original image with a scaled version of its high-pass filtered version.

14. Frequency Domain Filtering: Frequency domain filtering involves transforming an image from the spatial domain to the frequency domain using techniques like the Fourier Transform. Filtering is performed in the frequency domain to achieve various effects such as noise reduction, sharpening, and blurring.

15. Homomorphic Filtering: Homomorphic filtering is used for correcting non-uniform illumination in images. It operates in the logarithmic domain to separate the illumination and reflectance components of an image, making it useful for applications like medical imaging.

Big Question:

1. Histogram Equalization: Histogram equalization is a technique used to enhance the contrast and improve the overall visual appearance of an image. It redistributes the pixel intensity values in such a way that the resulting histogram becomes as uniform as possible. This helps to utilize the full range of intensity levels available and can be particularly effective when an image has a narrow or skewed intensity distribution. The process involves the following steps:

1. Calculate the histogram of the original image.
2. Calculate the cumulative distribution function (CDF) of the histogram.
3. Compute the transformation function that maps the original intensities to new intensities based on the CDF.
4. Apply the transformation function to all pixels in the image.
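
These four steps translate almost line-for-line into numpy; a minimal sketch for an 8-bit grayscale image:

```python
import numpy as np

def equalize(image: np.ndarray) -> np.ndarray:
    """Histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)   # 1. histogram
    cdf = hist.cumsum()                                # 2. cumulative distribution
    cdf_min = cdf[cdf > 0][0]                          # first occupied bin
    denom = cdf[-1] - cdf_min
    if denom == 0:                                     # flat image: nothing to equalize
        return image.copy()
    # 3. transformation function (lookup table) derived from the CDF
    lut = np.round((cdf - cdf_min) / denom * 255).astype(np.uint8)
    return lut[image]                                  # 4. apply to every pixel
```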

2. Histogram Specification: Histogram specification (also known as histogram matching) is a technique used to adjust the histogram of an image to match a specified histogram. This is particularly useful when you want to adjust the appearance of an image to match a desired distribution. The steps include:

1. Calculate the histograms of both the source and target images.
2. Compute the cumulative distribution functions (CDFs) for both histograms.
3. Calculate the mapping function that transforms the source CDF to the target
CDF.
4. Apply the mapping function to the pixel intensities of the source image.
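
A compact numpy sketch of these steps; mapping each source level to the target level with the nearest CDF value via interpolation is one common implementation choice, not the only one:

```python
import numpy as np

def match_histogram(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Remap `source` so its histogram approximates that of `target` (both 8-bit)."""
    src_cdf = np.bincount(source.ravel(), minlength=256).cumsum()
    tgt_cdf = np.bincount(target.ravel(), minlength=256).cumsum()
    src_cdf = src_cdf / src_cdf[-1]                    # normalize to [0, 1]
    tgt_cdf = tgt_cdf / tgt_cdf[-1]
    # For each source level, find the target level whose CDF value matches.
    lut = np.interp(src_cdf, tgt_cdf, np.arange(256)).round().astype(np.uint8)
    return lut[source]
```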

3. Spatial Smoothing Filter: Linear & Non-Linear:

• Linear Spatial Smoothing Filter: These filters, like the Gaussian filter, operate
by taking the weighted average of pixel intensities within a local
neighborhood. The weights are determined by a kernel (also called a mask or
filter matrix). The Gaussian filter is an example of a linear spatial filter used for
smoothing, where the weights are determined by the Gaussian distribution.
• Non-Linear Spatial Smoothing Filter: Filters like the median filter and the
bilateral filter are non-linear spatial filters used for smoothing. The median
filter replaces the center pixel value with the median value of the surrounding
pixels, which is effective at removing impulse noise. The bilateral filter
considers both spatial and intensity differences to preserve edges while
reducing noise.
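
For illustration, the linear and non-linear variants are one call each in scipy.ndimage (scipy.ndimage has no bilateral filter; OpenCV's cv2.bilateralFilter is one common implementation):

```python
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, (128, 128)).astype(np.float64)  # stand-in image

box    = ndimage.uniform_filter(img, size=3)      # linear: 3x3 moving average
gauss  = ndimage.gaussian_filter(img, sigma=1.0)  # linear: Gaussian-weighted average
median = ndimage.median_filter(img, size=3)       # non-linear: robust to impulse noise
```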

4. Spatial Sharpening Filter: Spatial sharpening filters enhance edges and fine
details in an image. The Laplacian filter is a common example of a spatial sharpening
filter. It calculates the second derivative of the image and accentuates areas with
rapid intensity changes, which correspond to edges. However, using the Laplacian
filter alone can amplify noise, so techniques like unsharp masking or high boost
filtering are often used to achieve better results.

5. Homomorphic Filter: Homomorphic filtering is used to correct non-uniform illumination in images. It operates in the logarithmic domain to separate the illumination (low-frequency components) and reflectance (high-frequency components) of an image. This is particularly useful for images with varying lighting conditions, such as medical images. The steps involve taking the logarithm of the input image, applying a high-pass filter in the frequency domain, and then exponentiating the result to obtain the enhanced image.
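
A sketch of this pipeline in numpy, using a Gaussian high-frequency-emphasis transfer function; the filter shape and the parameter values (γL, γH, cutoff D0) are typical choices rather than fixed by the method:

```python
import numpy as np

def homomorphic(image, gamma_l=0.5, gamma_h=2.0, d0=30.0, c=1.0):
    """Homomorphic filtering: log -> FFT -> high-frequency emphasis -> exp.

    gamma_l < 1 attenuates illumination (low frequencies);
    gamma_h > 1 boosts reflectance (high frequencies).
    """
    img = np.log1p(image.astype(np.float64))          # log domain; log(1+f) avoids log(0)
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2            # squared distance from the center
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    g = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.expm1(g)                                # back from the log domain
```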

Unit – 3

Two Marks:

1. Draw Image Degradation Model: An image degradation model depicts the process of how an original image is degraded during acquisition, transmission, or processing. Typically, it involves the following stages: Original Image -> Blurring (convolution with a point spread function) -> Noise addition (random noise) -> Degraded Image
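
In the standard notation, with f the original image, h the point spread function (PSF), and η additive noise, this model is written as:

g(x, y) = h(x, y) * f(x, y) + η(x, y)   (spatial domain; * denotes convolution)
G(u, v) = H(u, v) F(u, v) + N(u, v)     (equivalent frequency-domain form)
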
2. Median Filter: A median filter is a non-linear spatial filter used for noise
reduction. It replaces the center pixel value of a local neighborhood with the median
value of the pixel intensities within that neighborhood. The median filter is
particularly effective at removing impulse noise while preserving edges.

3. Order Statistics Filter: An order statistics filter is a generalization of the median filter. It replaces the center pixel value with an order statistic (e.g., minimum, maximum, median) of the pixel intensities within a local neighborhood. Different order statistics filters emphasize different aspects of the image, such as noise reduction or edge preservation.

4. Inverse Filtering: Inverse filtering is a restoration technique used to recover an original image from a degraded image. It assumes knowledge of the degradation process (point spread function) and attempts to invert the degradation process. However, inverse filtering can be sensitive to noise and may amplify high-frequency noise.

5. Wiener Filtering / LMS Filtering: Wiener filtering is a restoration technique that uses both the observed degraded image and knowledge of the noise characteristics to estimate the original image. The Wiener filter minimizes the mean square error between the estimated and actual images. LMS (Least Mean Squares) filtering is a specific algorithm used to implement Wiener filtering.

6. Blind Image Restoration: Blind image restoration aims to recover an image from
its degraded version without prior knowledge of the degradation process. This is a
challenging problem since the degradation function and noise characteristics are
unknown. Various algorithms and techniques are used, often involving assumptions
about the image and noise statistics.

7. How Edges Are Detected in an Image? Edges in an image can be detected using
techniques like gradient-based methods. These methods calculate the gradient of
the image intensity and identify regions where the gradient magnitude is high. The
edges correspond to rapid changes in intensity, and the gradient provides
information about the direction and strength of these changes.

8. Zero Crossing Property: In edge detection, the zero-crossing property is used to locate the positions where the second derivative of the intensity function crosses zero. These positions correspond to the transitions from dark to light or vice versa along an edge.

9. LoG / Mexican Hat Function: The Laplacian of Gaussian (LoG) filter, also known
as the Mexican Hat filter, is a second-order edge detection filter. It is obtained by
convolving the image with the Laplacian of a Gaussian function. The LoG filter
highlights edges by detecting zero crossings.
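
A brief sketch using scipy's Laplacian of Gaussian, marking sign changes between neighboring pixels as zero crossings (this simple neighbor test is one of several conventions):

```python
import numpy as np
from scipy import ndimage

def log_edges(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Boolean edge map from zero crossings of the LoG response."""
    log = ndimage.gaussian_laplace(image.astype(np.float64), sigma)
    edges = np.zeros_like(log, dtype=bool)
    # A zero crossing: the sign of the response flips between vertical
    # or horizontal neighbors.
    edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return edges
```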

10. Thresholding: Thresholding is a technique used for image segmentation. It involves selecting a threshold value and classifying pixels as either foreground or background based on their intensity values. Pixels with intensity values above the threshold are considered foreground, and those below are considered background.

11. Region Growing, Splitting and Merging:

• Region Growing: Starting from a seed point, pixels are added to a region if they meet certain criteria, such as having similar intensity values to the seed (see the sketch after this list).
• Splitting and Merging: This is an iterative approach where regions are split
into smaller regions if they do not meet certain criteria. Conversely,
neighboring regions with similar properties are merged together.
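
A minimal breadth-first sketch of region growing with a fixed intensity tolerance around the seed value; 4-connectivity and the tolerance test are illustrative choices:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed` (row, col); returns a boolean mask of the region."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True          # pixel meets the similarity criterion
                queue.append((ny, nx))
    return mask
```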

12. Watershed Segmentation: Watershed segmentation treats the grayscale image as a topographic map, where intensity values correspond to elevations. Watershed lines are the ridge lines from which water would drain into different catchment basins (local minima), and these lines are used to segment regions in the image.

13. Catchment Basin (or) Watershed: In the context of image segmentation, a catchment basin is the set of pixels from which the gradient flows towards the same local minimum of the intensity function; each basin corresponds to one segmented region.

14. Markers: Markers are used in watershed segmentation to define initial regions or
seeds from which the segmentation process starts. Markers guide the segmentation
algorithm by indicating which areas belong to different objects or regions.

15. How Dams Are Constructed in Watershed? In watershed segmentation, dams are constructed by preventing the merging of segments that shouldn't be merged. Dams are created along the watershed lines to retain boundaries between regions and avoid over-segmentation.

16. Drawbacks of Watershed: Watershed segmentation can suffer from over-segmentation, where small regions are split due to noise or variations in intensity. It can also be sensitive to noise, leading to undesired segmentations. Proper marker placement and preprocessing are important to mitigate these issues.

Big Question:

1. Image Restoration/Degradation Models: Image restoration aims to recover an original image from a degraded or distorted version. Degradation models describe the processes that cause degradation in images, typically including blurring and noise. Common degradation models include:

• Linear Degradation Model: Describes degradation as a linear convolution between the original image and a point spread function (PSF) to model blurring, followed by the addition of noise.
• Geometric Degradation Model: Incorporates geometric transformations like
rotation, scaling, and translation.
• Noise Models: Describe the characteristics of noise affecting the image, such
as Gaussian noise, salt-and-pepper noise, etc.

2. Spatial Restoration Filters: Mean, Order Statistics, and Adaptive:

• Mean Filter: A spatial restoration filter that replaces the center pixel value
with the average value of the pixel intensities within a local neighborhood. It's
effective at removing uniform noise but can blur edges.
• Order Statistics Filter: A type of filter that uses order statistics (e.g., median)
of pixel intensities within a neighborhood to restore an image. It's particularly
good at removing impulse noise.
• Adaptive Filter: These filters adjust their weights according to the local
characteristics of the image. Adaptive filters are useful when the noise
characteristics vary across the image.

3. Frequency Restoration Filters: Band Reject, Band Pass, Notch, and Optimum Notch:

• Band Reject Filter: Suppresses a range of frequencies while allowing others to pass. It's useful for removing specific types of noise or unwanted frequency components. (The notch filter below is a special case.)
• Band Pass Filter: Allows a specific range of frequencies to pass while
attenuating others. It's used for enhancing certain frequency components in
an image.
• Notch Filter: A type of band reject filter used to remove or suppress specific
narrow ranges of frequencies. It's effective for removing periodic noise.
• Optimum Notch Filter: An improved version of the notch filter that optimally
removes the undesired frequencies.

4. Restoration Filter: Inverse and Wiener/LMS Filters:

• Inverse Filtering: Attempts to undo the degradation process by dividing the
frequency components of the degraded image by the estimated frequency
components of the point spread function. However, it's highly sensitive to
noise and can amplify high-frequency noise.
• Wiener Filter / LMS Filter: A statistical filter that aims to minimize the mean
square error between the estimated and original images. It balances noise
reduction and restoration by using knowledge of the image and noise
statistics.
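
A frequency-domain sketch of both filters, assuming the PSF h is known and modeling the noise-to-signal ratio as a constant k (choosing k well is the practical difficulty):

```python
import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """Naive inverse filter: divide by the PSF spectrum (unstable where H ~ 0)."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)            # pad the PSF to the image size
    H = np.where(np.abs(H) < eps, eps, H)    # avoid division by (near) zero
    return np.fft.ifft2(G / H).real

def wiener_filter(g, h, k=0.01):
    """Wiener deconvolution with a constant noise-to-signal ratio k."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)    # Wiener transfer function
    return np.fft.ifft2(W * G).real
```

Setting k = 0 reduces the Wiener filter to the inverse filter, which makes the noise-amplification problem of pure inverse filtering easy to demonstrate.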

Examples:

1. Compare Image Enhancement and Restoration Techniques:

Image Enhancement:

• Objective: Improve visual quality for human perception or subsequent processing.
• Purpose: Emphasize specific features, improve contrast, reduce noise, or
adjust brightness.
• Techniques: Histogram equalization, contrast stretching, gray level slicing,
histogram specification, etc.
• Result: Enhanced image with better visual quality.

Image Restoration:

• Objective: Recover an original image from a degraded or distorted version.
• Purpose: Correct blurring, remove noise, or repair other types of degradation.
• Techniques: Inverse filtering, Wiener filtering, spatial restoration filters (mean,
median, adaptive), etc.
• Result: Restored image approximating the original before degradation.

Comparison:

• Image enhancement focuses on improving visual appearance, while restoration aims to recover the original image.
• Enhancement methods often involve modifying pixel intensities directly, while
restoration techniques often require knowledge of the degradation process.
• Enhancement may introduce artifacts or exaggerate noise, while restoration
aims to reduce noise and artifacts.
• Both enhancement and restoration can utilize spatial and frequency domain
techniques based on the image's characteristics.

2. Probability Density Functions for Rayleigh Noise Models: The Rayleigh distribution is commonly used to model noise in images, especially in cases where the noise is related to the magnitude of a signal, such as sensor noise. The probability density function (PDF) of Rayleigh noise is given by:

p(x) = (x / σ²) e^(−x² / (2σ²))  for x ≥ 0,  and p(x) = 0 otherwise

where x represents the noise magnitude, and σ is the scale parameter that controls the spread of the distribution.

3. Probability Density Functions for Erlang Noise Models: The Erlang distribution is often used to model more complex types of noise that might be influenced by multiple factors. The probability density function (PDF) of Erlang noise is given by:

p(x) = λ^k x^(k−1) e^(−λx) / (k − 1)!  for x ≥ 0,  and p(x) = 0 otherwise

where x represents the noise magnitude, k is the shape parameter (a positive integer), and λ is the rate parameter controlling the noise rate.
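
For experimentation, both noise fields can be generated with numpy's random generator (an Erlang variate is a gamma variate with integer shape k and scale 1/λ); the parameter values here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
dims = (256, 256)
image = np.full(dims, 128.0)                     # flat stand-in image

rayleigh = rng.rayleigh(scale=20.0, size=dims)            # Rayleigh noise, sigma = 20
erlang = rng.gamma(shape=3, scale=1.0 / 0.1, size=dims)   # Erlang: k = 3, lambda = 0.1

noisy = np.clip(image + rayleigh, 0, 255)        # additive-noise example
```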
