
MACHINE VISION

ASSIGNMENT I & III

By,
ANNAMALAI.PL
09G05
Region of Interest

A region of interest (ROI) is a user-defined subset of an image. The region of interest identifies the areas of interest for a machine vision application and removes the uninteresting image data that lies beyond the ROI.
ROIs can be applied by the camera, the interface device, or the host PC.
For cameras and interface devices, an ROI is a hardware-programmable
rectangular portion of the acquisition window defining the specific area of the
image to acquire. In an ROI acquisition with hardware, only the selected region
is transferred across the PCI bus. As a result, defining an ROI increases the
sustained frame rate for the system because the image size is smaller and there
is less data per acquisition to transfer across the bus. At the host PC, ROIs are
the regions of an image in which you want to focus your image processing and
analysis. These regions can be defined using standard contours, such as ovals or
rectangles, or freehand contours. In software, the user can define one or more
regions to be used for analysis. For image processing, applying an ROI reduces
the time needed to run the algorithms because there is less data to process.

Original image, ROI definition, and image after applying the ROI.
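
As an illustration of software-defined ROIs, the following minimal sketch applies a rectangular ROI with NumPy array slicing. The image size and ROI coordinates are illustrative assumptions, not values from the text.

# Minimal sketch: applying a rectangular ROI in software via NumPy slicing.
# The image size and ROI coordinates below are illustrative placeholders.
import numpy as np

def apply_roi(image: np.ndarray, x: int, y: int, width: int, height: int) -> np.ndarray:
    """Return the rectangular sub-image defined by the ROI."""
    return image[y:y + height, x:x + width]

# Synthetic 480 x 640 grayscale image; only the ROI is kept, so any
# subsequent processing step operates on less data.
full_image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
roi = apply_roi(full_image, x=100, y=50, width=200, height=150)
print(full_image.shape, "->", roi.shape)   # (480, 640) -> (150, 200)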

Classifiers

Two different types of classifiers can be identified. The first type of classifier tries to estimate a posteriori probabilities, typically via Bayes's theorem, from the a priori probabilities of the different classes. In contrast, the second type of classifier tries to construct the separating hypersurfaces between the classes. Below, we will examine representatives of both types of classifiers.
It can be shown that to minimize the probability of erroneously classifying the
feature vector, we should maximize the probability that the class ωi occurs
under the condition that we observe the feature vector x, i.e., we should
maximize P(ωi|x) over all classes ωi, i = 1, …, m [91, 97]. The probability
P(ωi|x) is also called a posteriori probability because of the above property that
it describes the probability of class ωi given that we have observed the feature
vector x. This decision rule is called the Bayes decision rule. It yields the best
classifier if all errors have the same weight, which is a reasonable assumption for OCR. We now face the problem of how to determine the a posteriori probability. Using Bayes's theorem, P(ωi|x) can be computed as follows:

P(ωi|x) = P(x|ωi) P(ωi) / P(x)
Hence, we can compute the a posteriori probability based on the a priori probability P(x|ωi) that the feature vector x occurs given that the class of the feature vector is ωi, the probability P(ωi) that the class ωi occurs, and the probability P(x) that the feature vector x occurs. To simplify the calculations, we note that the Bayes decision rule only needs to maximize P(ωi|x) and that P(x) is constant if x is given. Therefore, the Bayes decision rule can be written as:
x ∈ ωi ⇔ P(x|ωi) P(ωi) > P(x|ωj) P(ωj)  for all j = 1, …, m, j ≠ i
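
As an illustration of this decision rule, the following minimal sketch classifies a feature vector by maximizing P(x|ωi) P(ωi). Modelling the class-conditional densities as Gaussians estimated from training samples is an assumption made only for this example; the text does not prescribe how P(x|ωi) is obtained.

# Minimal sketch of the Bayes decision rule. The Gaussian model for the
# class-conditional densities P(x|omega_i) is an assumption for this example.
import numpy as np
from scipy.stats import multivariate_normal

class BayesClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {}   # P(omega_i), estimated from class frequencies
        self.dists_ = {}    # Gaussian estimate of P(x|omega_i)
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_[c] = len(Xc) / len(X)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
            self.dists_[c] = multivariate_normal(mean=Xc.mean(axis=0), cov=cov)
        return self

    def predict(self, x):
        # x is assigned to omega_i iff P(x|omega_i) P(omega_i) is maximal;
        # P(x) is constant for a given x and can therefore be ignored.
        scores = {c: self.dists_[c].pdf(x) * self.priors_[c] for c in self.classes_}
        return max(scores, key=scores.get)

# Toy usage with two 2-D classes.
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
              [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
print(BayesClassifier().fit(X, y).predict(np.array([1.1, 1.0])))   # -> 0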

Directional Properties of Light


The directional properties of light form the basis for the interaction of the lighting component with the reflective, transmitting and scattering properties of the
test object. They are divided into diffuse, directed, telecentric and structured
properties.
Diffuse lighting does not have a preferred direction of light emission; the light leaves the emitting surface in every direction. Frequently the light
emission obeys the rules of a Lambert radiator. This means that the luminance
indicatrix of a plane light source forms a half sphere and the luminance is
independent of the viewing direction.

Luminance indicatrix of a Lambert radiator.
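
A small numeric sketch of this property: the radiant intensity of a Lambert radiator falls off with cos θ, but the luminance, i.e. the intensity divided by the projected (apparent) source area, remains constant over the viewing direction. The values used are arbitrary illustrative units.

# Sketch of the Lambert radiator: intensity I(theta) = I0 * cos(theta),
# luminance L = I(theta) / (A * cos(theta)) = I0 / A, independent of theta.
# I0 and the emitting area are arbitrary illustrative values.
import numpy as np

I0 = 100.0     # intensity normal to the emitting surface (arbitrary units)
area = 1.0     # emitting area (arbitrary units)

for theta_deg in (0, 30, 60, 80):
    theta = np.radians(theta_deg)
    intensity = I0 * np.cos(theta)                  # falls off with cos(theta)
    luminance = intensity / (area * np.cos(theta))  # projected area shrinks too
    print(f"theta = {theta_deg:2d} deg   I = {intensity:6.2f}   L = {luminance:6.2f}")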


Most diffuse area lighting behaves like a Lambert radiator. Such lighting needs no special precautions or preferential directions during installation. For proper function it is important to achieve local homogeneity on the part. The illuminated area is
directly defined by the size of the luminous field of the diffuse lighting. The
general use of diffuse lighting is to obtain even lighting conditions.
Directed lighting has radiation characteristics that can vary widely. The directional characteristics of the individual light sources can vary, as can the characteristics of clusters of light sources. In general, lighting components in which multiple light sources overlap produce better light intensity and homogeneity.

Note that, during installation, shifting and tilting of the illumination or the object has a strong influence on the brightness and contrast in the image because of this directionality. The general use of directed lighting is to reveal edges and surface structures by reflection or shadowing. Even the strong contrasts desired for lighting-based pre-processing can be achieved.

Telecentric lighting is a special form of directed lighting with extremely strong directional characteristics, achieved by means of an optical system in front of the light source. Because the light source (typically an LED with a mounted pin-hole aperture) is placed in the focal plane of the optics, it produces parallel chief rays. Telecentric lighting is not parallel lighting, as the light source is not infinitely small; parallel, convergent and divergent light rays all contribute to the illumination.

Functional principle of telecentric lighting.
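
To make concrete the statement that telecentric lighting is not perfectly parallel, the following sketch estimates the residual divergence half-angle as arctan(s / (2 f)) for a light source of finite diameter s placed in the focal plane of collimating optics with focal length f. The diameter and focal length are placeholder assumptions, not values from the text.

# Sketch: residual divergence of a telecentric illuminator. A point source in
# the focal plane would give perfectly parallel chief rays; a source of finite
# diameter s yields a divergence half-angle of about arctan(s / (2 f)).
# The numbers below are illustrative placeholders.
import math

source_diameter_mm = 0.5    # pin-hole aperture in front of the LED
focal_length_mm = 100.0     # focal length of the collimating optics

half_angle = math.atan(source_diameter_mm / (2 * focal_length_mm))
print(f"residual divergence: +/- {math.degrees(half_angle):.3f} deg")   # ~0.143 deg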


Telecentric lighting is not as sensitive to vibration and misadjustment as parallel lighting. Even if it is not perfectly aligned with a telecentric objective, the telecentric principle still works, but the brightness distribution becomes inhomogeneous. It must therefore be ensured that telecentric components are mounted in a stable and well-defined way. Telecentric lighting works only in combination with telecentric objectives. If this is not taken into account, the view from an endocentric objective (an objective with perspective properties) into telecentric lighting shows only a spot, because to such an objective the lighting appears to lie at infinity, as the name telecentric already suggests.
Although telecentric lighting uses only a single LED, it is much brighter than diffuse transmitted lighting that uses many LEDs. This is because the light source of telecentric lighting emits light only in the direction where it is needed.

LED-based telecentric lighting produces incoherent light. This avoids speckles (intensity differences that arise with lasers from the interference of waves of the same wavelength and phase). Telecentric lighting is mostly used for transmitted-light applications. Typical wavelengths (light colors) are:
• Red light for maximum brightness (most imagers have their maximum sensitivity at red light)
• Blue light for maximum accuracy (the size of diffraction effects is proportional to the wavelength; see the sketch after this list)
• IR light for illumination with reduced extraneous light
• IR flash for very short and bright flashes in fast processes
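
The following sketch illustrates the blue-light bullet above: with the Airy disk radius r = 1.22 λ N and an assumed working f-number of N = 8 (a placeholder), the diffraction blur for blue light is noticeably smaller than for red light.

# Sketch: the size of diffraction effects scales with the wavelength.
# Airy disk radius r = 1.22 * lambda * N, with N the working f-number.
# The f-number and wavelengths are illustrative assumptions.
f_number = 8.0
wavelengths_nm = {"blue": 470, "red": 630}

for color, wl_nm in wavelengths_nm.items():
    airy_radius_um = 1.22 * (wl_nm * 1e-3) * f_number   # nm -> um
    print(f"{color:4s} ({wl_nm} nm): Airy disk radius ~ {airy_radius_um:.2f} um")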

A specific feature besides the directional properties of light is structured lighting. Superimposed on its direction, the light can carry various geometrical bright-dark structures, some of which are:
• Single points
• Grids of points (point arrays)
• Single lines
• Groups of parallel lines
• Grids of squared lines
• Single circles
• Concentric rings
• Single squares
• Concentric squares

The methods of producing these geometrical structures are manifold: everything is possible, from slides, templates and masks, through LCD projectors, to laser diodes with diffraction or interference gratings and intelligent adaptive LED lighting. The general use of structured light is to project the structure onto the test object. Knowledge of the projected light pattern and comparison with its distorted image give detailed information about the 3D structure and the topography of the part.
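
As a minimal sketch of how a projected pattern yields 3D information, the following example assumes a simple laser-line triangulation geometry (the projection angle and pixel scale are placeholder assumptions) and converts the lateral shift of the line observed in the image into a height on the part.

# Minimal sketch of structured lighting by line triangulation: a single line
# is projected at an angle onto the part, and a height step shifts the line
# sideways in the camera image. With the camera looking perpendicular to the
# reference plane and the projector tilted by `projection_angle_deg`, the
# height follows from dz = dx / tan(angle). Angle and pixel scale are
# illustrative assumptions.
import math

projection_angle_deg = 30.0   # tilt of the line projector w.r.t. the camera axis
mm_per_pixel = 0.05           # lateral scale of the camera image

def height_from_line_shift(shift_pixels: float) -> float:
    """Convert the observed lateral line shift (in pixels) into a height in mm."""
    dx_mm = shift_pixels * mm_per_pixel
    return dx_mm / math.tan(math.radians(projection_angle_deg))

print(f"line shift of 20 px -> height of {height_from_line_shift(20):.2f} mm")   # ~1.73 mm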
