
Example topics for the test:

1. Colour spaces – basics (what are: RGB, HSV, CMY, YCbCr, CIE Lab)

RGB is the most popular colour model. It is additive: each colour is obtained as the sum of the three primary colours: red, green, and blue.

CMY (Cyan, Magenta, Yellow) is the complement of the RGB model. Each colour is obtained as the
difference between white and the CMY components. It is used in printing devices.
Equal amounts of the CMY pigments should produce black, but in practice combining these inks
produces a muddy-looking black. To produce true black (the predominant colour in printing), a fourth
component, black (K), is added, giving rise to the CMYK colour model.

CMYK (Cyan, Magenta, Yellow, Black) – CMY with an additional black component K.

YCbCr – a colour image is represented by a brightness (luminance) component Y and two
chrominance components, Cb and Cr. This encoding is used in TV due to its backward compatibility with
black-and-white analogue television.

HSV – the colour is represented by three components: Hue (the colour), Saturation, and Value (also
called Brightness, Luminosity, or Intensity).

CIE Lab – the colour is represented by three components: L (luminance/lightness) and two chroma
components, a and b. This colour space is perceptually uniform, i.e. a small change in the components
results in a similarly small difference in human perception (as opposed to the RGB system).
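To illustrate the luminance/chrominance split in YCbCr, a minimal sketch of the standard BT.601 conversion for a single 8-bit pixel (the function name and full-range form are my own assumption, not from the notes):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (BT.601, full-range approximation)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b               # luminance
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b     # blue-difference chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b     # red-difference chroma
    return round(y), round(cb), round(cr)

# A greyscale pixel carries no chroma: both Cb and Cr sit at the midpoint 128.
print(rgb_to_ycbcr(255, 255, 255))  # -> (255, 128, 128)
```

This is why a black-and-white receiver can simply display Y and ignore Cb/Cr.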

2. Understanding histograms: draw an approximate histogram for a given image


3. Understanding histograms: how the histogram changes when changing brightness, contrast, etc.

Increasing brightness shifts the histogram to the right, as it increases the overall intensity of the pixels.
Decreasing brightness shifts the histogram to the left, as it reduces the overall intensity.

Increasing contrast stretches the histogram, making the dark areas darker and the bright areas brighter. This
results in a broader distribution of pixel intensities.
Decreasing contrast compresses the histogram, making the dark and bright areas less distinct. This narrows the
distribution of pixel intensities.
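The effect on the histogram follows directly from how the pixel values are transformed. A minimal sketch (hypothetical helper; a plain list stands in for an 8-bit image, and contrast is scaled about mid-grey 128 by assumption):

```python
def adjust(pixels, brightness=0, contrast=1.0):
    """Brightness adds a constant (histogram shifts); contrast scales about 128
    (histogram stretches or compresses). Results are clamped to 0..255."""
    out = []
    for p in pixels:
        v = contrast * (p - 128) + 128 + brightness
        out.append(max(0, min(255, round(v))))
    return out

img = [100, 120, 140, 160]
print(adjust(img, brightness=30))  # -> [130, 150, 170, 190]: shifted right
print(adjust(img, contrast=2.0))   # -> [72, 112, 152, 192]: spread out
```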

4. Histogram equalization (steps of the algorithm)

1. Calculation of the cumulative histogram
2. Normalization of the cumulative histogram by dividing its values by the total number of pixels in the
image
3. Multiplication of the values obtained in the second step by the maximum grey level and rounding of
the results to obtain non-negative integers
4. Creation of the resulting image by assigning the new values, calculated in the previous steps, to the
pixels

Example in notebook

5. Histogram stretching – know what it is.

Histogram stretching is a simple way of increasing the contrast of an image. It scales the pixel values so
that the whole available range is used (0–255 in the case of an 8-bit greyscale image).

6. Linear filters. What filters are used for blurring, sharpening, edge detection (e.g. provide an
example of a mask used for blurring/for edge detection; indicate the purpose of a filter based on its name
or mask, etc.)

A filter is defined by two parameters: the size of the mask (kernel) and the mask coefficients.

Low-pass filters are averaging filters; a typical coefficient matrix is, for example, a 3×3 mask with all coefficients equal to 1/9.

They remove isolated noise, smooth minor "turbulence" of edges, and eliminate aliasing. However,
filters of this type also have strongly adverse effects: they blur the image and corrupt the contours of
objects, thereby reducing the effectiveness of shape recognition.
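A minimal sketch of applying a 3×3 averaging (box) mask at one pixel, the simplest low-pass filter (helper names are my own; a list of lists stands in for a greyscale image):

```python
# A 3x3 averaging mask: all nine coefficients equal to 1/9.
KERNEL = [[1 / 9] * 3 for _ in range(3)]

def convolve_pixel(img, y, x, kernel):
    """Apply a 3x3 kernel centred at (y, x)."""
    acc = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += kernel[dy + 1][dx + 1] * img[y + dy][x + dx]
    return round(acc)

img = [[0, 0, 0],
       [0, 90, 0],
       [0, 0, 0]]
print(convolve_pixel(img, 1, 1, KERNEL))  # -> 10: the isolated spike is averaged down
```

This also shows the adverse side: the same averaging that removes the noise spike would smear a genuine edge.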

 Gaussian filter – a low-pass filter whose mask coefficients follow the 2-D Gaussian function.

High-pass spatial filters are designed to extract or emphasize image components associated with rapid
changes in brightness, such as contours, edges, and fine details. These filters are commonly referred to
as sharpening filters because they enhance or differentiate the signal. While the practical definition of
image sharpening can be challenging, it can be understood as an operation that emphasizes transitions
in intensity.

High-pass filters are divided into the following groups:

Edge detection filters - Laplacian

Corner detection filters

Directional filters

Check Sobel, Prewitt, Roberts – in notebook
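A minimal sketch of the Sobel masks applied at one pixel (helper names are my own; the two masks approximate the horizontal and vertical 1st derivatives):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def gradient(img, y, x):
    """Sobel gradient components at (y, x)."""
    gx = sum(SOBEL_X[dy + 1][dx + 1] * img[y + dy][x + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    gy = sum(SOBEL_Y[dy + 1][dx + 1] * img[y + dy][x + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return gx, gy

# Vertical edge: left half dark, right half bright.
img = [[0, 0, 255, 255]] * 3
print(gradient(img, 1, 1))  # -> (1020, 0): strong horizontal derivative, none vertical
```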

7. Non-linear filters: median, maximum and minimum filters. How they work and when we use
them.

Notebook

An interesting effect can be obtained by performing median filtering multiple times. It is called
posterization. As a result of the posterization, details are removed from the image and large areas
receive the same greyscale value.
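A minimal sketch of one step of 3×3 median filtering (hypothetical helper), showing why it suits salt-and-pepper noise better than averaging:

```python
def median3(img, y, x):
    """Replace the pixel at (y, x) by the median of its 3x3 neighbourhood."""
    window = sorted(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return window[4]  # middle of the 9 sorted values

img = [[10, 10, 10],
       [10, 255, 10],   # a single "salt" noise pixel
       [10, 10, 10]]
print(median3(img, 1, 1))  # -> 10: the outlier vanishes without blurring
```

An averaging filter would instead spread the 255 over the neighbourhood; the median discards it entirely.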
8. Morphological operations - perform a defined morphological operation (e.g. erosion, dilation,
opening, closing, top hat, bottom hat) for a given simple image. Definition of the skeleton. Draw the
skeleton of a given shape.

Morphological operations are a set of operations used in image processing and computer vision to
analyze and process structures based on their shapes. Morphological operations involve the use of a
structuring element, which is a small shape or pattern, to manipulate the pixels in an image.

A structuring element is a binary mask which defines a particular morphological operation. The most
commonly used structuring elements are squares of size 3×3 or 5×5. Sometimes other shapes are
required, e.g. one similar to a circle. Each element has a reference point, which is usually the central point.

 Erosion

It works by moving the structuring element over the image and replacing each pixel with the minimum
value of the pixels covered by the structuring element (it can be interpreted as a minimum filter).

 Dilation

Dilation is the opposite of erosion: it expands the boundaries of foreground objects in an image. Like
erosion, it uses a structuring element, but it replaces each pixel with the maximum value of the pixels
covered by the structuring element (it can be interpreted as a maximum filter).
 Opening operation is defined as erosion followed by dilation. Opening removes small objects
and small details.
 Closing operation is defined as dilation followed by erosion. Closing fills the narrow indents,
bays and small holes inside the object.
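A minimal sketch of erosion, dilation, and opening as min/max filters with a 3×3 square structuring element (helper names are my own; borders are left as background for brevity):

```python
def morph(img, op):
    """Erosion = minimum filter, dilation = maximum filter (3x3 square SE),
    on a binary image stored as a list of lists of 0/1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = min(window) if op == "erode" else max(window)
    return out

def opening(img):  # erosion followed by dilation
    return morph(morph(img, "erode"), "dilate")

img = [[0] * 5 for _ in range(5)]
img[2][2] = 1                       # one isolated foreground pixel
print(opening(img)[2][2])           # -> 0: opening removes the small object
```

Closing would be the same composition in the opposite order: `morph(morph(img, "dilate"), "erode")`.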

 Hit or Miss transform

The hit-or-miss transform preserves pixels whose neighbourhood matches the shape of SE1 and does not
match the shape of SE2. Pixels where both SE1 and SE2 are 0 correspond to "don't care". When SE2 is
the complement of SE1, the hit-or-miss transform detects the specific configuration of pixels that exactly
corresponds to the mask. It can also be used to thin and skeletonize a shape.

 Morphological reconstruction

Morphological reconstruction requires three parameters:

 marker image – the image to start the transformation,

 mask image – the image that restricts the transformation and

 structuring element.
The marker image is repeatedly dilated, but only within the mask image. More precisely, in each
iteration the new marker is the intersection of the dilated marker and the mask image. The algorithm
stops when consecutive dilations no longer change the image.
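The iteration above can be sketched on binary images as follows (hypothetical helpers; a 3×3 square SE and lists of 0/1 are my own assumptions):

```python
def dilate(img):
    """Binary dilation with a 3x3 square SE, clamped at the borders."""
    h, w = len(img), len(img[0])
    return [[max(img[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def reconstruct(marker, mask):
    """Repeat: dilate the marker, intersect with the mask, until stable."""
    while True:
        grown = dilate(marker)
        new = [[grown[y][x] & mask[y][x] for x in range(len(mask[0]))]
               for y in range(len(mask))]
        if new == marker:
            return marker
        marker = new

# The mask has two components; the marker touches only the left one,
# so only that component is reconstructed.
mask = [[1, 1, 0, 1, 1],
        [1, 1, 0, 1, 1]]
marker = [[1, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
print(reconstruct(marker, mask))  # left component filled, right one untouched
```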

 Skeletonization

The skeleton of a figure is defined as the set of all points that are equidistant from at least two points
belonging to the edge of the figure.

 top hat (also called white hat or white top-hat): I − opening(I) – detects small (thin) bright areas

 bottom hat (also called black hat or black top-hat): closing(I) − I – detects small (thin) dark areas
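A minimal sketch of the white top-hat on a 1-D "image" (helper names and the 1×3 structuring element are my own simplification):

```python
def erode1d(sig):
    return [min(sig[max(0, i - 1):i + 2]) for i in range(len(sig))]

def dilate1d(sig):
    return [max(sig[max(0, i - 1):i + 2]) for i in range(len(sig))]

def top_hat(sig):
    """White top-hat: signal minus its opening. Keeps only structures
    thinner than the structuring element; flat background goes to 0."""
    opened = dilate1d(erode1d(sig))
    return [a - b for a, b in zip(sig, opened)]

sig = [10, 10, 50, 10, 10]   # one thin bright spike on a flat background
print(top_hat(sig))          # -> [0, 0, 40, 0, 0]: only the spike survives
```

The bottom hat is the mirror image: `closing(sig) - sig` highlights thin dark structures instead.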

9. Edge detection: Canny filter (including how binarization with two thresholds works)
The Canny filter is based on the Sobel filter (so it uses the 1st derivative of the image), but it adds
several processing stages. The algorithm works as follows:

Step 1. Noise reduction with a Gaussian filter. The sensitivity of the Canny filter depends on the
standard deviation (sigma) of the Gaussian used in this step.

Step 2. Finding edges using approximation of 1st derivative (Sobel filter)

Step 3. Edge thinning – so-called non-maximum suppression is used: along the gradient direction (the
direction perpendicular to the edge), only the pixel with the maximum value is kept.

Step 4. Hysteresis thresholding – binarization with two thresholds: pixels with gradient magnitude
above the high threshold are accepted as edge pixels, pixels below the low threshold are rejected, and
pixels between the two thresholds are kept only if they are connected to a pixel above the high
threshold.
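A minimal sketch of hysteresis thresholding on a 1-D row of gradient magnitudes (helper name and the specific thresholds are my own; connectivity is reduced to left/right neighbours for brevity):

```python
LOW, HIGH = 50, 100

def hysteresis(grad):
    """Pixels >= HIGH are strong edges; pixels in [LOW, HIGH) are kept
    only if connected (here: adjacent) to an accepted edge pixel."""
    edges = [g >= HIGH for g in grad]
    changed = True
    while changed:                     # propagate edge status to weak neighbours
        changed = False
        for i, g in enumerate(grad):
            if not edges[i] and LOW <= g < HIGH:
                if (i > 0 and edges[i - 1]) or (i + 1 < len(grad) and edges[i + 1]):
                    edges[i] = changed = True
    return edges

grad = [30, 60, 120, 70, 40, 80]
print(hysteresis(grad))  # the 60 and 70 survive via the strong 120; the 80 does not
```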

10. Hough transform. E.g. draw the Hough transform in (θ, r) space for a given image (composed of several
pixels).
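A minimal sketch of the voting step (helper name is my own): each foreground pixel (x, y) votes for every line r = x·cos θ + y·sin θ through it, and collinear pixels agree on one (θ, r) cell.

```python
import math

def hough_votes(points, theta_deg):
    """The r value each pixel votes for at a fixed angle theta."""
    t = math.radians(theta_deg)
    return [round(x * math.cos(t) + y * math.sin(t)) for x, y in points]

# Three pixels on the horizontal line y = 2: their sinusoids in (theta, r)
# space all pass through the same cell at theta = 90 degrees, r = 2.
points = [(0, 2), (3, 2), (7, 2)]
print(hough_votes(points, 90))  # -> [2, 2, 2]: all three votes coincide
```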

11. Binarization: manual threshold selection based on the histogram. Give examples of automatic
threshold selection methods. Binarization with two thresholds.
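A minimal sketch of binarization with two thresholds (helper name and threshold values are my own): a pixel becomes foreground only if its grey level falls inside the interval [T1, T2].

```python
T1, T2 = 80, 160

def binarize2(pixels):
    """Two-threshold binarization: keep grey levels inside [T1, T2]."""
    return [1 if T1 <= p <= T2 else 0 for p in pixels]

print(binarize2([10, 90, 150, 200]))  # -> [0, 1, 1, 0]
```

With a single threshold only one side of the histogram can be selected; two thresholds isolate a band of grey levels, e.g. an object whose intensity lies between a darker and a brighter background.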

12. Shape recognition: what are shape coefficients. How we use shape coefficients for shape
recognition.
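As an example of a shape coefficient, a minimal sketch of circularity C = 4πA / P² (the function name is my own): C equals 1 for a perfect circle and decreases for elongated or jagged shapes, so comparing C against reference ranges lets us classify shapes.

```python
import math

def circularity(area, perimeter):
    """Circularity (compactness) coefficient: 1.0 for a circle, < 1 otherwise."""
    return 4 * math.pi * area / perimeter ** 2

circle = circularity(math.pi * 10 ** 2, 2 * math.pi * 10)  # radius-10 circle -> 1.0
square = circularity(10 * 10, 4 * 10)                      # 10x10 square -> pi/4
print(circle, square)
```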
