📘 Digital Image Processing – Notes
CO1 – Image Fundamentals
1. Define visual perception.
Visual perception is the process by which the human visual system
interprets and understands images from light stimuli captured by the eyes.
2. Role of brightness and contrast in visual perception.
o Brightness: Controls overall lightness/darkness of an image.
o Contrast: Defines the difference between light and dark regions.
→ Both affect clarity and detail in perception.
3. Main components of an image acquisition system:
o Illumination source
o Lens
o Image sensor (CCD/CMOS)
o Digitizer
o Storage/Display unit
4. How a CCD sensor captures an image:
o Photons strike the sensor, generating electron charges in pixels.
o Charges are transferred and converted into voltage signals.
o These signals form the image.
5. Sampling vs Quantization:
o Sampling: Selecting pixel positions in spatial/temporal domain.
o Quantization: Mapping continuous pixel values to discrete intensity
levels.
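A minimal NumPy sketch of quantization (the `quantize` helper and the sample values are illustrative, not from the notes):

```python
import numpy as np

# Hypothetical continuous-valued samples in [0, 1] taken on a sampling grid.
signal = np.array([0.12, 0.48, 0.73, 0.95, 0.31])

def quantize(values, levels):
    """Map continuous values in [0, 1] to `levels` discrete intensity levels."""
    # Scale to [0, levels - 1] and round to the nearest integer level.
    return np.round(values * (levels - 1)).astype(int)

print(quantize(signal, 2))    # 1-bit (binary): [0 0 1 1 0]
print(quantize(signal, 256))  # 8-bit: much finer intensity resolution
```

With only 2 levels, distinct inputs collapse to the same output (intensity distortion); 256 levels keep them separate.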
6. Effect of increasing quantization levels:
Improves image quality: finer intensity steps reduce quantization error and
false contouring, preserving more detail.
7. Role of pinhole camera model in image formation:
Projects 3D scene points onto a 2D plane through a single aperture
(pinhole).
8. Principle of perspective projection:
Projects objects onto a 2D plane such that farther objects appear smaller →
mimics human eye perception.
9. Affine transformation + Example:
A linear mapping that preserves points, straight lines, and parallelism.
Example: Scaling, rotation, or translation.
10. Geometric image registration:
Aligning two or more images of the same scene taken at different times,
sensors, or viewpoints.
11. Four types of digital images:
o Binary
o Grayscale
o RGB (color)
o Multispectral
12. Binary vs Grayscale (pixel values):
o Binary: 0 or 1 (black/white).
o Grayscale: 0–255 intensity values (for 8-bit images).
13. 4-neighbors vs 8-neighbors in pixel connectivity:
o 4-neighbors: Connected up, down, left, right.
o 8-neighbors: Connected in all 8 directions (including diagonals).
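The two neighbourhoods can be sketched as coordinate offsets (the helper names below are illustrative):

```python
# 4- vs 8-neighbour offsets for a pixel at (r, c).
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # plus the four diagonals

def neighbors(r, c, offsets, shape):
    """Return the in-bounds neighbour coordinates of pixel (r, c)."""
    rows, cols = shape
    return [(r + dr, c + dc) for dr, dc in offsets
            if 0 <= r + dr < rows and 0 <= c + dc < cols]

print(len(neighbors(1, 1, N4, (3, 3))))  # 4 (interior pixel)
print(len(neighbors(0, 0, N8, (3, 3))))  # 3 (corner pixel, clipped)
```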
14. Adjacency in segmentation:
Defines connected pixels → affects boundary detection and region grouping.
15. Distance transform & application:
Assigns each pixel a value equal to its distance from the nearest object
boundary.
Application: Skeletonization, shape analysis.
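A brute-force sketch of the idea on a tiny binary image (illustrative only; real implementations use fast two-pass or chamfer algorithms):

```python
import numpy as np

def distance_transform(img):
    """For each foreground (1) pixel, the Euclidean distance to the nearest
    background (0) pixel. O(n^2) brute force -- fine only for tiny images."""
    bg = np.argwhere(img == 0)
    out = np.zeros(img.shape, dtype=float)
    for r, c in np.argwhere(img == 1):
        out[r, c] = np.sqrt(((bg - (r, c)) ** 2).sum(axis=1)).min()
    return out

img = np.array([[0, 0, 0, 0],
                [0, 1, 1, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0]])
print(distance_transform(img))  # every object pixel is 1.0 from the boundary
```

Ridges of locally maximal distance values give the shape's skeleton, which is how the transform supports skeletonization.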
CO2 – Image Transforms
1. 1D Discrete Fourier Transform (DFT):
Transforms a discrete signal from the time (spatial) domain into the
frequency domain.
2. Inverse 1D DFT:
Reconstructs the original discrete signal from its frequency-domain
representation.
3. Applications of 1D DFT:
o Frequency-domain filtering
o Image compression/reconstruction
4. Formula for 2D DFT:
F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) · e^{−j2π(ux/M + vy/N)}
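The definition can be checked directly against NumPy's fast implementation (the function name is illustrative):

```python
import numpy as np

def dft2_direct(f):
    """2-D DFT computed straight from the definition (O(M^2 * N^2))."""
    M, N = f.shape
    x = np.arange(M).reshape(M, 1)
    y = np.arange(N).reshape(1, N)
    F = np.zeros((M, N), dtype=complex)
    for u in range(M):
        for v in range(N):
            F[u, v] = np.sum(f * np.exp(-2j * np.pi * (u * x / M + v * y / N)))
    return F

f = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(dft2_direct(f), np.fft.fft2(f))  # matches the FFT result
```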
5. Significance of frequency domain representation in 2D DFT:
Separates image content by frequency → useful for analysing edges and
textures, and for frequency-domain filtering.
6. Difference between 1D and 2D DFT:
o 1D DFT: Applied to 1D signals.
o 2D DFT: Applied to images (2D data).
7. Purpose of DCT in image compression:
Concentrates energy into fewer coefficients → efficient compression.
8. Advantage of DCT over DFT:
DCT produces only real coefficients and has better energy compaction →
more efficient for compression (used in JPEG).
9. Expression for 1D DCT:
X(k) = α(k) Σ_{n=0}^{N−1} x(n) · cos[π(2n + 1)k / (2N)], k = 0, 1, …, N−1
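A direct NumPy sketch of this formula, assuming the common orthonormal scaling α(0) = √(1/N) and α(k) = √(2/N) for k > 0:

```python
import numpy as np

def dct(x):
    """Orthonormal DCT-II computed straight from the definition."""
    N = len(x)
    n = np.arange(N)
    # alpha[k]: sqrt(1/N) for k = 0, sqrt(2/N) otherwise (same index range as n).
    alpha = np.where(n == 0, np.sqrt(1 / N), np.sqrt(2 / N))
    return alpha * np.array(
        [np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N))) for k in range(N)]
    )

x = np.array([1.0, 2.0, 3.0, 4.0])
X = dct(x)
print(np.round(X, 3))  # the first (DC) coefficient dominates: energy compaction
```

Because this scaling makes the transform orthogonal, the total energy is preserved while most of it concentrates in the low-order coefficients.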
10. Basis functions of Walsh Transform:
Walsh functions (square waveforms with +1 and –1 values).
11. Useful property of Walsh Transform:
Orthogonality → efficient image representation.
12. Walsh Transform vs DFT:
o Walsh: Square wave functions.
o DFT: Sinusoidal functions.
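The ±1 basis can be built from the recursive Hadamard construction (a sketch; the rows here are in natural Hadamard order, not sequency order):

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of size 2^n; entries are +1/-1 square-wave samples."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])  # H_{2N} = [[H, H], [H, -H]]
    return H

H = hadamard(2)                      # 4x4 matrix of +1/-1 values
print(H @ H.T)                       # N * I: the rows are orthogonal
```

Orthogonality means the transform is easily inverted and involves only additions and subtractions, which is what makes it cheap.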
13. Objective of PCA in image processing:
Reduce dimensionality while preserving maximum variance (features).
14. Role of eigenvectors in PCA:
Define directions of maximum data variation.
15. Real-time application of PCA:
Face recognition.
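The PCA steps above can be sketched with an eigen-decomposition of the covariance matrix (toy data, not face images):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 100 feature vectors in 3-D with correlated, unequal-variance axes.
X = rng.normal(size=(100, 3)) @ np.array([[3, 0, 0], [1, 1, 0], [0, 0, 0.1]])

Xc = X - X.mean(axis=0)                  # 1. centre the data
cov = Xc.T @ Xc / (len(X) - 1)           # 2. sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # 3. eigen-decomposition (ascending)
order = np.argsort(eigvals)[::-1]        # 4. sort by decreasing variance
components = eigvecs[:, order]           #    eigenvectors = principal axes

Z = Xc @ components[:, :2]               # 5. project 3-D -> 2-D
print(Z.shape)                           # (100, 2): most variance retained
```

In face recognition the same steps are applied to flattened face images; the leading eigenvectors are the "eigenfaces".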
CO3 – Image Enhancement
1. Gray-level transformation:
Operations mapping input pixel intensities to new output intensities.
Examples: Log transformation, Negative transformation.
2. Effect of negative transformation (8-bit grayscale):
Reverses intensities → dark areas become bright, bright areas become dark.
3. Gamma correction (power-law transformation):
Adjusts brightness non-linearly: γ < 1 brightens mid-tones, γ > 1 darkens them.
Formula:
s = c · r^γ
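A minimal sketch of the power-law transform on an 8-bit image (the helper name and test value are illustrative):

```python
import numpy as np

def gamma_correct(img, gamma, c=1.0):
    """Power-law transform s = c * r**gamma, applied on normalised [0, 1] values."""
    r = img / 255.0                         # normalise 8-bit input to [0, 1]
    s = c * np.power(r, gamma)
    return np.clip(s * 255, 0, 255).astype(np.uint8)

mid = np.array([[128]], dtype=np.uint8)     # a mid-grey pixel
print(gamma_correct(mid, 0.5))  # gamma < 1 brightens mid-tones
print(gamma_correct(mid, 2.0))  # gamma > 1 darkens them
```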
4. Purpose of histogram equalization:
Redistributes pixel intensities via the image's cumulative histogram →
produces an approximately uniform histogram, improving contrast.
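A compact NumPy sketch of the CDF-based mapping (the function name and sample image are illustrative):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalisation of an 8-bit image via its normalised CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size             # cumulative histogram in [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)   # lookup table: old -> new level
    return lut[img]

# Low-contrast image: all values squeezed into [100, 103].
img = np.array([[100, 100, 101, 101],
                [102, 102, 103, 103]], dtype=np.uint8)
out = hist_equalize(img)
print(out.min(), out.max())  # intensities spread over a much wider range
```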
5. Smoothing (low-pass) spatial filtering:
Reduces noise & blurs fine details by averaging neighborhood pixels.
Common filter: Mean filter.
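A plain-NumPy sketch of a 3×3 mean (box) filter with edge replication (helper name illustrative):

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter: each output pixel is the average of its neighbourhood."""
    padded = np.pad(img.astype(float), 1, mode="edge")  # replicate borders
    out = np.zeros(img.shape, dtype=float)
    for dr in (-1, 0, 1):                # accumulate the 9 shifted copies
        for dc in (-1, 0, 1):
            out += padded[1 + dr : 1 + dr + img.shape[0],
                          1 + dc : 1 + dc + img.shape[1]]
    return out / 9.0

noisy = np.zeros((5, 5))
noisy[2, 2] = 9.0                        # a single "noise" spike
print(mean_filter3(noisy)[2, 2])         # 1.0 -- the spike is averaged down
```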
6. Sharpening (high-pass) spatial filtering:
Emphasizes intensity changes → makes edges more visible.
7. Ideal low-pass filter (frequency domain):
Passes frequencies within a cutoff radius, blocks others.
8. Ideal vs Gaussian low-pass filters:
o Ideal: Sharp cutoff → may cause ringing.
o Gaussian: Smooth cutoff → avoids ringing artifacts.
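The two filters can be sketched as frequency-domain masks for a centred (fftshifted) spectrum (function and parameter names are illustrative):

```python
import numpy as np

def lowpass_masks(shape, cutoff):
    """Ideal and Gaussian low-pass masks over distance D from the spectrum centre."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from centre
    ideal = (D <= cutoff).astype(float)              # hard cutoff -> ringing
    gauss = np.exp(-(D ** 2) / (2 * cutoff ** 2))    # smooth roll-off, no ringing
    return ideal, gauss

ideal, gauss = lowpass_masks((64, 64), cutoff=10)

# Applying one in the frequency domain:
img = np.random.default_rng(1).random((64, 64))
F = np.fft.fftshift(np.fft.fft2(img))
smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(F * gauss)))
```

Multiplying the shifted spectrum by either mask and transforming back gives the filtered image; the Gaussian mask trades a slightly softer cutoff for the absence of ringing artifacts.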