
NI Vision

Dr. Héctor Simón Vargas Martínez
Digital Images
 Definition of a Digital Image
Properties of a Digitized Image
 Image Resolution
 Image Definition
 Number of Planes
Image Types
 Grayscale Images
 Color Images
Image Types
 Complex Images

A complex image contains the frequency information of a grayscale image. You can create a complex image by applying a fast Fourier transform (FFT) to a grayscale image. After you transform a grayscale image into a complex image, you can perform frequency-domain operations on the image.

Each pixel in a complex image is encoded as two single-precision floating-point values, which represent the real and imaginary components of the complex pixel. You can extract the following four components from a complex image: the real part, imaginary part, magnitude, and phase.
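The four components can be illustrated with a short NumPy sketch. The data is hypothetical, and NumPy stands in here for the NI Vision functions; this is not the NI Vision API itself:

```python
import numpy as np

# Hypothetical 8-bit grayscale image (a horizontal gradient).
gray = np.tile(np.arange(64, dtype=np.uint8), (64, 1))

# Create a complex image by applying an FFT to the grayscale image.
complex_image = np.fft.fft2(gray.astype(np.float32))

# Extract the four components of the complex image.
real_part = complex_image.real
imag_part = complex_image.imag
magnitude = np.abs(complex_image)
phase = np.angle(complex_image)
```

An inverse FFT of the complex image recovers the original grayscale data, which is why frequency-domain processing can be done without losing the image.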
Image Files
 Bitmap (BMP)
 Tagged image file format (TIFF)
 Portable network graphics (PNG)—Offers the capability of
storing image information about spatial calibration,
pattern matching templates, and overlays
 Joint Photographic Experts Group format (JPEG)
 National Instruments internal image file format (AIPD)—
Used for saving floating-point, complex, and HSL images

Standard formats for 8-bit grayscale and RGB color images are BMP, TIFF, PNG, JPEG, and AIPD. Standard formats for 16-bit grayscale, 64-bit RGB, and complex images are PNG and AIPD.
Internal Representation of an NI Vision Image
Image Borders
Display
 Use display functions to visualize your
image data, retrieve generated events
and the associated data from an image
display environment, select ROIs from
an image interactively, and annotate the
image with additional information.
Display Modes
 One of the key components of displaying images is the display mode in which the video adaptor operates. The display mode indicates how many bits specify the color of a pixel on the display screen. Generally, the display modes available from a video adaptor range from 8 bits to 32 bits per pixel, depending on the amount of video memory available on the video adaptor and the screen resolution you choose.
Palettes
 When a grayscale image is displayed on the screen, NI Vision converts the value of each pixel of the image into red, green, and blue intensities for the corresponding pixel displayed on the screen. This process uses a color table, called a palette, which associates a color with each possible grayscale value of an image. NI Vision provides the capability to customize the palette used to display an 8-bit grayscale image.
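The palette lookup amounts to indexing a 256-entry color table with each pixel value. A minimal NumPy sketch (the image and palette are hypothetical; a custom palette would simply fill the table with different colors):

```python
import numpy as np

# A palette: 256 entries, each an (R, G, B) triple. This "gray palette"
# maps value v to (v, v, v); a custom palette could map v to any color.
palette = np.stack([np.arange(256, dtype=np.uint8)] * 3, axis=1)  # shape (256, 3)

# Hypothetical 8-bit grayscale image.
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Displaying the image is a table lookup: each pixel value indexes the palette.
rgb = palette[image]  # shape (2, 2, 3)
```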
Palettes
Binary Palette
Regions of Interest
 A region of interest (ROI) is an area of an image in which you want to perform your image analysis.

 Use ROIs to focus your processing and analysis on part of an image. You can define an ROI using standard contours, such as an oval or rectangle, or freehand contours. You also can perform any of the following operations:
• Construct an ROI in an image display environment
• Associate an ROI with an image display environment
• Extract an ROI associated with an image display environment
• Erase the current ROI from an image display environment
• Transform an ROI into an image mask
• Transform an image mask into an ROI
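The two mask transformations in the list can be sketched with NumPy for a rectangular ROI (coordinates and image size are hypothetical):

```python
import numpy as np

# Hypothetical 100x100 image; a rectangular ROI given by its corners.
height, width = 100, 100
top, left, bottom, right = 20, 30, 60, 80

# Transform the ROI into an image mask: nonzero inside the ROI, zero elsewhere.
mask = np.zeros((height, width), dtype=np.uint8)
mask[top:bottom, left:right] = 255

# Transform the mask back into an ROI (here, the bounding rectangle
# of the nonzero pixels).
rows, cols = np.nonzero(mask)
roi = (rows.min(), cols.min(), rows.max() + 1, cols.max() + 1)
```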
Nondestructive Overlay
 A nondestructive overlay enables you to
annotate the display of an image with useful
information without actually modifying the
image. You can overlay text, lines, points,
complex geometric shapes, and bitmaps on
top of your image without changing the
underlying pixel values in your image; only the
display of the image is affected. Figure 2-1
shows how you can use the overlay to depict
the orientation of each particle in the image.
System Setup and Calibration
 This chapter describes how to set up an
imaging system and calibrate the
imaging setup so that you can convert
pixel coordinates to real-world
coordinates. Converting pixel
coordinates to real-world coordinates is
useful when you need to make accurate
measurements from inspection images
using real-world units.
Setting Up Your Imaging System
Acquiring Quality Images
 Resolution
 Contrast
 Depth of field
 Perspective
 Distortion
Resolution
Field of View
Resolution
Sensor Size and Number of Pixels in the Sensor
 The camera sensor size is important in
determining your field of view, which is a key
element in determining your minimum resolution
requirement. The sensor’s diagonal length
specifies the size of the sensor’s active area.
The number of pixels in your sensor should be
greater than or equal to the pixel resolution.
Choose a camera with a sensor that satisfies
your minimum resolution requirement.
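A common machine-vision rule of thumb (an assumption here, not stated in the text) is that the smallest feature you must resolve should span at least two pixels, which turns the field of view into a minimum pixel count:

```python
# Hypothetical values: a 50 mm field of view and a 0.5 mm smallest feature.
field_of_view_mm = 50.0
smallest_feature_mm = 0.5

# Rule of thumb: at least 2 pixels across the smallest feature, so
#   pixel resolution = (field of view / smallest feature) * 2
required_pixels = (field_of_view_mm / smallest_feature_mm) * 2
# A sensor with at least this many pixels along that axis meets the requirement.
```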
Resolution
Lens Focal Length
 When you determine the field of view and appropriate sensor
size, you can decide which type of camera lens meets your
imaging needs. A lens is defined primarily by its focal length.
The relationship between the lens, field of view, and sensor
size is as follows:

focal length = (sensor size × working distance) / field of view

 If you cannot change the working distance, you are limited in choosing a focal length for your lens. If you have a fixed working distance and your focal length is short, your images may appear distorted. However, if you have the flexibility to change your working distance, modify the distance so that you can select a lens with the appropriate focal length and minimize distortion.
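A worked instance of the formula above, with hypothetical values:

```python
# focal length = (sensor size * working distance) / field of view
sensor_size_mm = 8.0         # sensor dimension along the axis of interest
working_distance_mm = 250.0  # lens-to-object distance
field_of_view_mm = 100.0     # required field of view along the same axis

focal_length_mm = (sensor_size_mm * working_distance_mm) / field_of_view_mm
```

With these numbers the required focal length is 20 mm; if only stock focal lengths are available, the working distance can be adjusted instead, as the text suggests.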
Contrast
 Resolution and contrast are closely related
factors contributing to image quality. Contrast
defines the differences in intensity values
between the object under inspection and the
background. Your imaging system should have
enough contrast to distinguish objects from the
background. Proper lighting techniques can
enhance the contrast of your system.
Depth of Field
 The depth of field of a lens is its ability to keep objects of varying heights in focus. If you need to inspect objects with various heights, choose a lens that can maintain the image quality you need as the objects move closer to and farther from the lens.
Perspective
Spatial Calibration
 Spatial calibration is the process of
computing pixel to real-world unit
transformations while accounting for
many errors inherent to the imaging
setup. Calibrating your imaging setup is
important when you need to make
accurate measurements in real-world
units.
Calibration Process
Coordinate System
Calibration Algorithms
 NI Vision has two algorithms for calibration: perspective and nonlinear.
Perspective calibration corrects for perspective errors, and nonlinear
calibration corrects for perspective errors and nonlinear lens distortion.
Learning for perspective is faster than learning for nonlinear distortion.

 The perspective algorithm computes one pixel-to-real-world mapping for the entire image. You can use this mapping to convert the coordinates of any pixel in the image to real-world units.

 The nonlinear algorithm computes pixel-to-real-world mappings in a rectangular region centered around each dot in the calibration grid, as shown in Figure 3-8. NI Vision estimates the mapping information around each dot based on its neighboring dots. You can convert pixel units to real-world units within the area covered by the grid dots. Because NI Vision computes the mappings around each dot, only the area in the image covered by the grid dots is calibrated accurately.
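As a rough illustration of the idea of one mapping for the entire image (a simplification, not the NI Vision algorithm: a full perspective fit uses a homography, and nonlinear calibration goes further), a single linear pixel-to-real-world mapping can be fitted to the calibration-grid dots by least squares. All coordinates below are hypothetical:

```python
import numpy as np

# Known real-world dot positions on a calibration grid (mm)...
real = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])

# ...and where those dots appear in the image (pixels). Here a synthetic
# scale-and-offset so the true mapping is known.
pixels = real * 4.0 + np.array([100.0, 50.0])

# Fit one affine pixel -> real-world mapping for the whole image.
A = np.hstack([pixels, np.ones((len(pixels), 1))])  # rows of [x, y, 1]
coeffs, *_ = np.linalg.lstsq(A, real, rcond=None)   # shape (3, 2)

def pixel_to_real(p):
    """Convert pixel coordinates to real-world units with the fitted mapping."""
    x, y = p
    return np.array([x, y, 1.0]) @ coeffs
```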
Image Processing and Analysis
 Image Analysis
• Histograms
• Line profiles
• Intensity measurements
 Image Processing
• Lookup tables
• Kernels
• Spatial filtering
• Morphology
 Operators
• Arithmetic and logic operators that mask, combine, and compare images
 Frequency Domain Analysis
• The fast Fourier transform (FFT)
• Analyzing and processing images in the frequency domain
Using Gauging for Part Inspection
 Components such as connectors, switches, and
relays are small and manufactured in high quantity.

 While human inspection of these components is tedious and time consuming, vision systems can quickly and consistently measure certain features on a component and generate a report with the results.

 From the results, you can determine whether a part meets its specifications.
Using Gauging for Part Inspection
 Gauging consists of making critical distance
measurements—such as lengths, diameters,
angles, and counts—to determine if the product is
manufactured correctly.

 Gauging inspection is often used in mechanical assembly verification, electronic packaging inspection, container inspection, glass vial inspection, and electronic connector inspection.
Tutorial
Loading Images into Vision Assistant
1. If Vision Assistant is already running, click the Open Image button in the toolbar, and go to step 4. Otherwise, go to step 2.
2. Select Start»All Programs»National Instruments Vision Assistant.
3. Click Open Image on the Welcome Screen.
4. Navigate to <Vision Assistant>\Examples\bracket, where <Vision Assistant> is the location to which Vision Assistant is installed.
5. Enable the Select All Files checkbox.
6. Click Open to load the image files into Vision Assistant.
The first image, Bracket1.jpg, loads in the Processing window.
Finding Measurement Points Using Pattern Matching
 If the Script window already contains a script, click New Script
to open a new script.
 Select Pattern Matching in the Machine Vision Processing
Functions tab, or select Machine Vision»Pattern Matching.
 Click New Template. The NI Vision Template Editor opens.
 With the Rectangle Tool, click and drag to draw a square
ROI around the left hole in the image, as shown in Figure 4-2.
The ROI becomes the template pattern.

Figure 4-2. Selecting a Template Pattern


 Click Next.
 Click Finish. Learning the template takes a few seconds. After
Vision Assistant learns the template, the Save Template as dialog
box opens.
 Navigate to <Vision Assistant>\Examples\bracket.
 Save the template as template.png. The Pattern Matching Setup
window displays the template image and its path.
 Click the Settings tab.
 Set Number of Matches to Find to 1.
 Set the Minimum Score to 600 to ensure that Vision Assistant
finds matches similar, but not identical, to the template.
 Enable the Subpixel Accuracy checkbox.
 Make sure Search for Rotated Patterns is not selected to set the
search mode to shift invariant. Use shift-invariant matching when you
do not expect the matches you locate to be rotated in their images. If
you expect the matches to be rotated, use rotation-invariant matching.

 With the Rectangle Tool, draw an ROI around the left side of the
bracket, as shown in Figure 4-3. Be sure that the region you draw is
larger than the template image and big enough to encompass all
possible locations of the template in the other images you analyze.
Drawing an ROI in which you expect to locate a template match is a
significant step in pattern matching. It reduces the risk of finding a
mismatch. It also allows you to specify the order in which you want to
locate multiple instances of a template in an image and speeds up the
matching process.
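Shift-invariant pattern matching is commonly built on normalized cross-correlation. The sketch below is not the NI Vision implementation, but it shows why scores fall on a 0-1000 scale (so a minimum score of 600 accepts similar, non-identical matches) and why a smaller search ROI speeds up matching: fewer candidate positions must be scored. The image and template are hypothetical:

```python
import numpy as np

def match_score(image, template, top, left):
    """Normalized cross-correlation of the template against one window (0-1000)."""
    h, w = template.shape
    window = image[top:top + h, left:left + w].astype(float)
    t = template.astype(float)
    wz, tz = window - window.mean(), t - t.mean()
    denom = np.sqrt((wz ** 2).sum() * (tz ** 2).sum())
    if denom == 0:
        return 0.0  # flat window: no correlation to measure
    return 1000.0 * (wz * tz).sum() / denom

def find_best_match(image, template, roi):
    """Scan a rectangular search ROI; return (score, (top, left)) of best match."""
    top0, left0, bottom, right = roi
    h, w = template.shape
    best = (-1.0, None)
    for top in range(top0, bottom - h + 1):
        for left in range(left0, right - w + 1):
            s = match_score(image, template, top, left)
            if s > best[0]:
                best = (s, (top, left))
    return best

# Hypothetical image containing a small intensity pattern; the template is
# that pattern, so the best match should be found at its true location.
image = np.zeros((40, 40), dtype=np.uint8)
image[10:16, 20:26] = np.arange(36, dtype=np.uint8).reshape(6, 6) * 5
template = image[10:16, 20:26].copy()

score, location = find_best_match(image, template, (0, 0, 40, 40))
```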

Figure 4-3. Selecting the First Search Area


 Click OK to save this step to the script.
 Select Pattern Matching in the Machine Vision tab of the Inspection steps, or select Machine Vision»Pattern Matching.
 Click Load from File and open the template you just saved.
 Click the Settings tab.
 Set Number of Matches to Find to 1.
 Set the Minimum Score to 600 to ensure that Vision Assistant finds
matches that are similar, but not identical, to the template.
 Enable the Sub-pixel Accuracy checkbox.
 With the Rectangle Tool, draw an ROI around the right side of the
bracket, as shown in Figure 4-4. Vision Assistant automatically locates
the template in the region bound by the rectangle and displays the
score and location of the match.

Figure 4-4. Selecting the Second Search Area


Finding Edges in the Image
 Select Edge Detector in the Machine Vision tab of the Inspection
steps, or select Machine Vision»Edge Detector.
 Select the Advanced Edge Tool from the Edge Detector drop-down
listbox. The Advanced Edge Tool is effective on images with poor
contrast between the background and objects.
 Select First & Last Edge from the Look For drop-down listbox so
that Vision Assistant finds and labels only the first and last edges along
the line you draw.
 Set the Min Edge Strength to 40. The detection process returns
only the first and last edge whose contrast is greater than 40.
 Click and drag to draw a vertical line across the middle of the bracket
to find the edges that you can use to calculate Width Center, as shown
in Figure 4-5. Vision Assistant labels the edges 1 and 2.
Figure 4-5. Finding the Edges for Bracket Distance

 Look at the edge strength profile. The sharp transitions in the line
profile indicate edges. Notice that the number of edges found is
displayed under the edge strength profile.

 Click OK to add this step to the script.
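The first-and-last-edge logic can be sketched in NumPy. The profile values are hypothetical; the threshold of 40 mirrors the Min Edge Strength set above:

```python
import numpy as np

# Hypothetical line profile across the bracket: dark - bright - dark.
profile = np.array([10, 12, 11, 90, 95, 94, 96, 15, 12], dtype=float)

# Edge strength: intensity change between neighboring points along the line.
strength = np.diff(profile)

# Keep only edges whose contrast exceeds the minimum edge strength (40).
edges = np.nonzero(np.abs(strength) > 40)[0]

# The "First & Last Edge" result: edges labeled 1 and 2 in the tutorial.
first_edge, last_edge = edges[0], edges[-1]
```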


Taking the Measurements
 Select Caliper in the Machine Vision tab, or select Machine
Vision»Caliper.
 Select Mid Point in the Geometric Feature listbox.
 Click points 3 and 4 in the image to obtain the Width Center
measurement, which specifies the center of the bracket width. When
you select a point in the image, Vision Assistant places a check mark
next to the corresponding point in the Caliper Setup window.
 Click Measure to compute the center of the bracket width and add the Mid Point measurement to the results table, as shown in Figure 4-6.
 Click OK to add this step to the script.
Figure 4-6. Using the Caliper Function to Find Width Center
 Select Caliper in the Machine Vision tab, or select Machine Vision»Caliper again. The center of the bracket width appears as point 5.
 Select Distance in the Geometric Feature listbox.
 Click points 1 and 2 in the image to find the Bracket Distance, which
measures the length between the manufactured holes in the bracket
and determines if the bracket arch is the appropriate height.
 Click Measure to compute the distance between the bracket holes.
The distance measurement is added to the results table, as shown in
Figure 4-7.
 Select Angle Defined by 3 Points in the Geometric Feature listbox.
Click points 1, 5, and 2, in this order, to find the next measurement—
Bracket Angle—which measures the angle of the bracket arms with
respect to a vertex at point 5, as shown in Figure 4-8.
Figure 4-7. Using the Caliper Function to Find Bracket Distance
 Click Measure to compute the angle of the bracket arms and add
the measurement to the results table. Figure 4-8 shows the image with
Bracket Distance and Bracket Angle selected on the image and
displayed in the results table.
 Click OK to add these caliper measurements to the script and
close the caliper window.
 Select File»Save Script As, and save the script as bracket.scr.
Figure 4-8. Using the Caliper Tool to Collect Measurements
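The three caliper measurements above (Mid Point, Distance, and Angle Defined by 3 Points) reduce to simple coordinate geometry. A NumPy sketch with hypothetical point coordinates, not the actual bracket values:

```python
import numpy as np

# Hypothetical hole centers (points 1 and 2) and width-edge points (3 and 4).
p1 = np.array([20.0, 80.0])
p2 = np.array([80.0, 80.0])
p3 = np.array([50.0, 60.0])
p4 = np.array([50.0, 20.0])

# Mid Point: center of the bracket width (point 5).
p5 = (p3 + p4) / 2

# Distance: length between the two holes (Bracket Distance).
bracket_distance = np.linalg.norm(p2 - p1)

# Angle Defined by 3 Points: angle of points 1-5-2 with the vertex at
# point 5 (Bracket Angle), from the dot product of the two arm vectors.
v1, v2 = p1 - p5, p2 - p5
cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
bracket_angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```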
