
In-Sight Spreadsheets

Standard
Welcome
Dear Student,

Thank you. You have entrusted us to take on a unique role with a rich history of
helping people achieve great things.

Every day humans walk around with a tiny spark in their mind. This spark represents
a very basic human desire to thrive: to learn new things, and to use that knowledge to
understand the world around them. Education is the fuel that spark needs to power
the engine in our brain. When you feed the spark with the right fuel, that’s when great
things happen. I firmly believe that great things don’t just happen – teachers help
people make great things happen.

The Cognex training team has created this class to bring that spark and their skills
together to help you make great things happen. We look forward to seeing the great
things our students can accomplish.

Sincerely,

Miguel Perez
Manager, Education Services & Technical Support
Agenda

Course Description
In-Sight Spreadsheets Standard (TRN-IS-CGNX-STD) gives new or
potential In-Sight users a 2-day overview of the hardware and software used
by In-Sight vision systems. With the focus on getting the most from the In-
Sight Explorer Spreadsheets interface, users learn to walk through the
process of setting up a vision application using spreadsheet programming
best practices.

Expected Outcomes
At the end of this course, Participants will be able to:
– Identify In-Sight hardware and software interface components
– Demonstrate skillful use of the:
• Spreadsheet interface
• Tools to solve vision inspections
• Communication options of the system
– Build starter operator interfaces
– Explain the fundamentals of Lighting and Optics

Agenda – Day One

8:30 am 8:45 am Introductions and Overview 15 min


8:45 am 9:30 am Hardware and Connections 45 min
9:30 am 10:00 am Lab Exercise 30 min
10:00 am 10:45 am Spreadsheets and Image Acquisition 45 min
10:45 am 11:00 am Break 15 min
11:00 am 11:15 am Lab Exercise 15 min
11:15 am 12:00 pm Pattern Matching and Logic 45 min
12:00 pm 1:00 pm Lunch 60 min
1:00 pm 1:30 pm Lab Exercise 30 min
1:30 pm 2:15 pm ExtractHistogram and Edges 45 min
2:15 pm 2:45 pm Lab Exercise 30 min
2:45 pm 3:00 pm Break 15 min
3:00 pm 3:45 pm Blobs and Image Tools 45 min
3:45 pm 4:15 pm Lab Exercise 30 min
4:15 pm 4:45 pm Calibration 30 min
4:45 pm 5:00 pm Wrap-up 15 min

Agenda – Day Two

8:30 am 8:45 am Review of Day One 15 min


8:45 am 9:15 am Calibration Lab Exercise 30 min
9:15 am 9:30 am Discrete I/O 15 min
9:30 am 10:00 am Lab Exercise 30 min
10:00 am 10:15 am Break 15 min
10:15 am 11:00 am Network Communications 45 min
11:00 am 11:30 am Lab Exercise 30 min
11:30 am 12:00 pm Building Operator Interfaces 30 min
12:00 pm 1:00 pm Lunch 60 min
1:00 pm 1:30 pm Building Operator Interfaces (continued) 30 min
1:30 pm 2:00 pm Lab Exercise 30 min
2:00 pm 2:30 pm Maintenance and Deployment 30 min
2:30 pm 2:45 pm Lab Exercise 15 min
2:45 pm 3:00 pm Break 15 min
3:00 pm 4:15 pm Lighting and Optics 75 min
4:15 pm 5:00 pm Final Project 45 min

In-Sight Spreadsheets
Standard

Recording in Progress…

Section 1 | Slide 2

Welcome to the In-Sight Spreadsheets Standard class. This training class will be recorded and the recordings will be available to all participants – please keep
this in mind when sharing any company related information with the class.
This is the introductory course for the spreadsheet programming environment for In-Sight cameras. It
covers In-Sight Explorer, the application for the PC that serves as the interface to In-Sight, as well as the
basic software tools. The tool categories include pattern recognition, histogram, blob, edge, image filters,
and calibration.

This course also covers communication with other devices, such as discrete I/O and network
communication.

The labs will give you hands-on experience with the inspection of an actual part. If you don’t have the
part, you can use images available on www.cognex.com.

Section 1 | Slide 1 Section 1 | Slide 2


Introductions

1. Your Name
2. Your Company
3. Your Location
4. Your Role in the Company
5. What is the most important thing for you to learn in this class?

CEUs Awarded

Cognex’s Education Services department has been recognized by the International
Association for Continuing Education and Training (IACET) as an Accredited
Provider of CEUs.

In order to earn CEUs for this training event the participant will:
• Be an active participant throughout the training event
• Be in attendance for 100% of the training event’s designated training hours
  o The participant can miss up to 15% of the live training event and still earn
    CEUs provided they complete the work associated with missed sections.

Section 1 | Slide 3 Section 1 | Slide 4

Take a few minutes to introduce yourself to the class. Please share the following information:
1. Your Name
2. Your Company
3. Your Location
4. Your Role in the Company
5. What is the most important thing for you to learn in this class?

Earning CEUs

In order to earn CEUs (Continuing Education Units) for this training event the participant will:
• Be an active participant throughout the training event
• Be in attendance for 100% of the training course’s designated training hours
  o The participant can miss up to 15% of the live training event and still earn CEUs provided
    they watch the prerecorded missed lessons, perform the hands-on lab exercises and
    complete the appropriate skills journal entries
    • In a 2 day class, 15% = 126 minutes (2 hours and 6 minutes)
    • In a 4 day class, 15% = 252 minutes (4 hours and 12 minutes)

Section 1 | Slide 3 Section 1 | Slide 4


Course Objectives

At the end of this Training Course Participants will be able to:
• Demonstrate how to connect the In-Sight camera to a network
• Navigate through a spreadsheet
• Create basic mathematical formulas involving If & And functions
• Implement fixturing in an In-Sight job
• Implement the core vision tools, including pattern recognition, histogram, blob, and edge tools
• Describe the purpose and methods of calibration
• Describe different forms of communication: discrete I/O and network communications
• Create a Custom View in a job
• Explain the fundamentals of Lighting and Optics

Proprietary Interests

Proprietary Interests are profits, rights, ownership shares or advantages held by
the full or partial owner of a tangible or intangible asset or property.

Cognex Technical Instructors have no proprietary interest in any of the materials
or products that are included within this training event.

Section 1 | Slide 5 Section 1 | Slide 6

At the end of the In-Sight Spreadsheets Standard training class, Participants will be able to:
- Demonstrate how to connect the In-Sight camera to the network
- Navigate through a Spreadsheet
- Create basic mathematical formulas involving If & And functions
- Implement fixturing in an In-Sight job
- Implement the core vision tools, including pattern recognition, histogram, blob and edge tools
- List the advantages and limitations of Image Tools
- Import and Export Snippets
- Describe the purpose and methods of calibration
- List the four conditions that can affect whether In-Sight is Online or Offline
- Describe different forms of communication – discrete I/O and network communications
- Create a custom view in a job, including status indicators, results of vision analysis, and a button
  to control the region of a tool
- Explain the fundamentals of Lighting and Optics

Proprietary Interests are profits, rights, ownership shares or advantages held by the full or partial owner
of a tangible or intangible asset or property.

The Cognex Technical Instructors have no proprietary interest in any of the materials or products that are
included within this training event.

Section 1 | Slide 5 Section 1 | Slide 6
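
As a preview of the “If & And” objective above: In-Sight spreadsheet formulas read much like Excel formulas. The cell references and threshold values in this sketch are hypothetical, chosen only to show the shape of such a formula; the actual tool results you combine will depend on your job:

```
If(And(C5 > 90, D7 < 0.25), 1, 0)
```

Read: the cell evaluates to 1 (pass) only when the value in C5 exceeds 90 and the value in D7 is below 0.25; otherwise it evaluates to 0 (fail).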


Hardware and Connections
Section 1

Objectives of this Section

At the end of this Section Participants will be able to:
• Explain what Cognex does and its place in the market
• Identify the In-Sight product offerings
• Discuss how image chip pixels and Field of View affect accuracy
• Demonstrate how to connect an In-Sight camera to the network

Section 1 | Slide 8

The first section of the In-Sight Spreadsheets Standard training will focus on Hardware and
Connections.

At the end of this section, Participants will be able to:
- Explain what Cognex does and its place in the market
- Identify the In-Sight product offerings
- Discuss how image chip pixels and Field of View affect accuracy
- Demonstrate how to connect an In-Sight camera to the network

Section 1 | Slide 7 Section 1 | Slide 8


Who is Cognex?

• Leader in machine vision
• 36+ years in business
• 1,600+ employees
• $748M 2016 revenue
• 1,500,000+ systems shipped
• 4,000 direct customers
• Global offices in 20 countries
• 500+ channel partners

Cognex Mission Statement

“To be the most successful machine vision company in the world and to be a
company that exceeds the expectations of its customers, employees,
shareholders, vendors and neighbors.”
Section 1 | Slide 9 Section 1 | Slide 10

Cognex is the world’s most trusted vision company. Since 1981, Cognex has provided successful vision
solutions to companies around the world; that track record is the primary reason manufacturers
consistently select Cognex. With over a million systems installed in factories around the world, we have
the experience and application knowledge to ensure that our vision systems will do exactly the job you
need … time after time after time.

Cognex vision systems and sensors help companies improve their manufacturing quality and
performance: eliminating defects, verifying assembly, and tracking and capturing information at every
stage of production, to make sure that the entire process is completed correctly. Smarter automation
using Cognex vision means fewer production errors, and that means lower manufacturing costs and
higher customer satisfaction.

Cognex has the widest global presence of any company in the industry, with offices in 20 countries
throughout the Americas, Europe, and Asia. These Cognex resources, combined with our network of over
500 local distributors and partner system integrators, mean we are everywhere you are.

The company is headquartered close to Boston in Natick, Massachusetts, USA and is publicly traded on
the Nasdaq stock market under the symbol CGNX.

The Cognex Mission Statement is in place to ensure Employees and Customers know the overall goals of
the company.

“To be the most successful machine vision company in the world and to be a company that exceeds the
expectations of its customers, employees, shareholders, vendors and neighbors.”

Section 1 | Slide 9 Section 1 | Slide 10


Education Services Team – Mission Statement

“The Education Services team strives to be the standard around the world for
excellence in machine vision education, through the development of training
materials and the delivery of engaging and empowering training events that
exceed the expectations of our customers, employees, shareholders and
neighbors.”

Cognex Locations

Offices and Partner Sites

Section 1 | Slide 11 Section 1 | Slide 12

Similar to the Cognex Mission Statement, the Education Services Team also has a Mission Statement.

“The Education Services team strives to be the standard around the world for excellence in machine
vision education, through the development of training materials and the delivery of engaging and
empowering training events that exceed the expectations of our customers, employees, shareholders and
neighbors.”

Cognex has a very broad global presence, so we are able to support you anywhere in the world with local
representatives for training and support. Cognex serves an international customer base from offices
located throughout North America, Europe, Japan, Asia and Latin America, and through a global network
of integration and distribution partners. The company is headquartered close to Boston in Natick,
Massachusetts, USA.

Americas
• Corporate Hdq: Natick, MA
• California: Hayward, San Diego
• Illinois: Naperville
• Indiana: Carmel
• Michigan: Plymouth
• Missouri: Chesterfield
• North Carolina: High Point
• Oregon: Portland
• Pennsylvania: Plymouth Meeting
• Tennessee: Franklin
• Wisconsin: West Allis
• Canada: Montreal
• Brazil: São Paulo

Europe
• Austria: Vienna
• France: Paris, Lyon
• Germany: Aachen, Karlsruhe
• Hungary: Budapest
• Ireland: Cork
• Italy: Milan
• The Netherlands: Eindhoven
• Poland: Wroclaw
• Spain: Barcelona
• Sweden: Vasteras
• Switzerland: St. Gallen
• Turkey: Istanbul
• United Kingdom: Epsom, Silverstone

Japan
• Tokyo, Fukuoka, Osaka, Nagoya

Greater China
• China: Beijing, Chengdu, Guangzhou, Nanjing, Qingdao, Shanghai*, Shenzhen, Wuhan
• Taiwan: Hsin-Chu City

• India: Bangalore, Pune
• Korea: Seoul
• Singapore
• Thailand: Bangkok

Section 1 | Slide 11 Section 1 | Slide 12


Cognex Product Offerings

The Only Vision Company Offering a Complete Line of Machine Vision Products:
• DataMan
• Displacement Sensor (3D)
• VisionPro
• ViDi
• In-Sight

In-Sight Product Offerings
• Micro Series
• 2000 Series
• 5000 Series
• 7000 Series

Section 1 | Slide 13 Section 1 | Slide 14

Before we begin to focus on In-Sight Spreadsheets, let’s take a look at the complete line of machine
vision products that Cognex offers to fill any of your vision needs.

The DataMan barcode readers are optimized with patented algorithms for the highest read rates (99.9%)
in the most challenging DPM (Direct Part Mark) and label-based identification applications.

Displacement Sensors (3D) optimize product quality by providing three-dimensional inspection of your
products. These industrial sensors come bundled with a vision controller, Cognex Designer software and
world-class 3D and 2D vision tools.

VisionPro and Cognex Vision Library (CVL) Power Tools have the intelligence to ignore non-critical
variations in appearance while focusing on the critical features that determine a product’s acceptability.

ViDi is the first ready-to-use Deep Learning-based software dedicated to industrial image analysis.
Cognex ViDi Suite is a field-tested, optimized and reliable software solution based on a state-of-the-art set
of algorithms in Machine Learning. The Suite consists of 3 different tools:
• Feature localization & identification: finds and localizes single or multiple features within an
  image.
• Segmentation & defect detection: detects anomalies and aesthetic defects, such as scratches
  on a decorated surface, incomplete or improper assemblies, or weaving problems in textiles.
• Object & scene classification: classifies an object or a complete scene. Be it the identification
  of products based on their packaging, the classification of welding seams or the separation of
  acceptable and unacceptable defects, ViDi learns to separate different classes based on a
  collection of labelled images.

Depending on the model, an In-Sight device is technically called a “sensor” or “vision system,” but we
often refer to it informally as a “camera.”

There are four series of In-Sight cameras:
- Micro Series
- 2000 Series
- 5000 Series
- 7000 Series

You program an In-Sight camera by means of an application called In-Sight Explorer, which runs on a PC
that is networked to the camera. The programming ‘language’ is actually a spreadsheet, very similar to
Excel. We call the spreadsheet and its contents a job.

In-Sight Explorer has two modes:
- Spreadsheet Mode, in which you set up the spreadsheet itself
- EasyBuilder Mode, in which you go through a series of menu steps that create the spreadsheet
  for you.

Once the job has been created, you do not need to have a PC on the network to run the job.

You can create, edit, and view jobs from In-Sight Explorer. More than one camera can be viewed at a time
by means of In-Sight Explorer’s tiling ability in its Windows pull-down menu.

Section 1 | Slide 13 Section 1 | Slide 14


In-Sight Product Offerings: VisionView Touch-Screen Display

In-Sight Explorer in EasyBuilder Mode

• Provides intuitive setups for applications.
• You do not see the spreadsheet. EasyBuilder creates it for you.

Section 1 | Slide 15 Section 1 | Slide 16

The VisionView Operator Interface is ideal for monitoring and controlling Cognex vision systems and
image-based ID readers on the factory floor, and allows operator controls specific to the application.

A VisionView touch-screen display can display up to 9 In-Sight cameras at the same time. It can be
customized to display specified values from a job, and to allow for operator inputs of specified parameters.

The VisionView application software is available on four platforms (VisionView PC, VisionView 900 Panel,
VisionView VGA, and VisionView CE-SL for third party CE Panels) which all feature:

- Automatic Detection – Quickly detect any Cognex vision system on your network.
- Mix and Match Cognex In-Sight and DataMan systems – View up to nine systems in a tiled
  view.
- Graphical Interface – Display full color images, with graphic overlays and operator controls.
- Fast Image Updates – See the most recent inspection images so you can view your process in
  real time.
- Access to CustomViews and EasyView – The operator controls created in the spreadsheet or
  items selected from In-Sight EasyBuilder software will appear on the VisionView screen.
- Run-time ability to train fonts, without a PC – no downtime during changeovers – ideal for
  OCR/OCV applications.

The EasyBuilder Interface allows you to go through a series of menu steps that create the spreadsheet
for you.

The benefits of the EasyBuilder Interface include:
- Provides intuitive setups for applications.
- You do not see the spreadsheet. EasyBuilder creates it for you.
- Makes even the most powerful vision tools easy to use.

Section 1 | Slide 15 Section 1 | Slide 16


In-Sight Explorer in Spreadsheet Mode

• Gives access to all In-Sight functionality
• Allows creation of a custom graphical user interface (Custom View)
• Allows complex logic statements
• Can be slightly faster

Round Trip Development

EasyBuilder View and Spreadsheet View
Section 1 | Slide 17 Section 1 | Slide 18

The Spreadsheet Interface allows you to set up the spreadsheet itself. This allows for customization to
fit your specific needs.

The benefits of the Spreadsheet Interface include:
- Allows access to all In-Sight functionality
- Allows creation of a custom graphical user interface (Custom View)
- Allows complex logic statements
- Can be slightly faster
- Scripting capabilities

Round Trip Development refers to the process of building a machine vision application in both the
EasyBuilder and Spreadsheet development environments of In-Sight Explorer.

A job started in the EasyBuilder environment may be edited in the Spreadsheet environment, allowing for
customization of parameters exposed in the EasyBuilder interface. For example, the parameters for
Location and Inspection Tools may be added, removed or renamed, and additional logic and functionality
that is only available in the Spreadsheet environment may be added to enhance EasyBuilder jobs as well.

NOTE: Not all In-Sight sensors may access the Spreadsheet, or load EasyBuilder jobs that have been
modified in the Spreadsheet environment. If a job developed in the EasyBuilder environment is edited in
the Spreadsheet environment, this job cannot be deployed on an EasyBuilder-only In-Sight sensor.

Section 1 | Slide 17 Section 1 | Slide 18


In-Sight Models

In-Sight Micro Series (1xxx and 8xxx series)

Highlights:
• Small (30x30x60 mm)
• Power over Ethernet (PoE)
• Gigabit Ethernet

In-Sight Micro 8405

Section 1 | Slide 19 Section 1 | Slide 20

In this section we will cover the different models of In-Sight cameras as well as the different connections
that are available.

Below are some of the features of the In-Sight Micro Series Vision System.

Smallest Vision Systems
In-Sight Micro compresses an entire self-contained vision system into an amazingly small package
measuring just 30mm X 30mm X 60mm. The ultra-compact 5MP In-Sight Micro 8405 model features a
flexible form factor which can be deployed straight or at a right angle. Other In-Sight Micro models can be
mounted at angles of up to 45 degrees using the In-Sight non-linear calibration tool.

A Pattern Matching Breakthrough
PatMax RedLine was designed with one goal in mind: blazing fast pattern matching on the new In-Sight
5MP vision systems, including the In-Sight 8405. Together with PatMax RedLine, the 8405 can reduce
cycle times and increase throughput without compromising inspection accuracy.

Unmatched Performance and Reliability
Every In-Sight Micro vision system model delivers best-in-class performance. Most models are equipped
with a full library of proven Cognex vision tools through the easy to use In-Sight Explorer software.

Easy to Deploy and Maintain
In-Sight EasyBuilder configuration software includes scripting functionality to condense repetitive,
complex calculations and logic into a single cell to reduce spreadsheet clutter. Even the most powerful
vision tools are accessible to users with little vision experience.

Section 1 | Slide 19 Section 1 | Slide 20


In-Sight 2000 Series

Highlights:
• Field changeable integrated lights, filters, and lenses
• Field changeable cabling: in-line or right angle

In-Sight 5000 Series

Highlights:
• Rugged
• IP67 with lens cover

Section 1 | Slide 21 Section 1 | Slide 22

In-Sight® 2000 series vision sensors combine the power of an In-Sight vision system with the simplicity
and affordability of a vision sensor. These vision sensors provide value, ease of use, and flexibility thanks
to a powerful combination of proven In-Sight vision tools, a simple setup, and a modular design featuring
field changeable lighting and optics.

The In-Sight 2000 series includes an integrated, high-performance image formation system consisting of
field interchangeable lenses and a patent-pending LED ring light that produces even, diffuse illumination
across the entire image and eliminates the need for costly external lighting. Lenses and a variety of light
colors can be easily swapped out as needed to meet application requirements.

In-Sight 2000 series vision sensors can be configured for in-line and right-angle mounting installation. This
modular body design provides maximum flexibility to mount in tight spaces, simplifies wiring and optical
paths, and minimizes the need to design new mechanical fixtures.

Lighting
An 8-LED Diffuse Ring Light (White) is standard. Field changeable lighting options include: Red and IR 8-
LED Diffuse Ring Lights, Red and IR Light Filters and a light polarizer.

Filters
Field changeable Polarized, Red, and IR filters are available.

Lenses
The In-Sight 2000 ships with an 8 mm Standard M12 lens. Field changeable 3.6 mm, 6 mm, 12 mm, 16
mm, and 25 mm Optional M12 lenses are available.

The In-Sight 5000 is a rugged IP67-rated series of industrial cameras featuring more than fifteen different
model types, including high speed, high resolution, color and line scan. Below are the features of the
In-Sight 5000.

Industrial-grade Design
The In-Sight 5000 series vision systems are the only industrial smart cameras in the world that provide
industrial-grade features as standard:
- Rugged die-cast aluminum (IP67) housing
- Sealed M12 connectors
- Protective lens covers

Unmatched Performance and Reliability
Every In-Sight 5000 vision system model delivers best-in-class performance. Most models are equipped
with a full library of proven Cognex vision tools through the easy to use In-Sight Explorer software.

A Pattern Matching Breakthrough
PatMax RedLine was designed with one goal in mind: blazing fast pattern matching on the new In-Sight
5MP vision systems, including the In-Sight 5705 and 5705C. Together with PatMax RedLine, the 5705 and
5705C can reduce cycle times and increase throughput without compromising inspection accuracy.

Easy to Deploy and Maintain
With the In-Sight EasyBuilder configuration software, even the most powerful vision tools are accessible
to users with little vision experience. With In-Sight vision systems you have the tools you need to keep
your line operating on schedule and at full throughput:
- TestRun system validation
- Cognex Connect suite of communications protocols
- Cognex Explorer control center
Section 1 | Slide 21 Section 1 | Slide 22
In-Sight 7000 Series

Highlights:
• Autofocus lens option
• On board discrete I/O: 3 inputs, 4 outputs
• Integrated lights available

In-Sight 7000 Series: Gen II

Models 76xx, 78xx, etc.

Highlights:
• SD slot for saving jobs and images
• Buttons on camera for acquisition (TRIG) and event triggering (TUNE)
• Integrated light with 4 individually controllable banks

Section 1 | Slide 23 Section 1 | Slide 24

There are many applications for the award-winning In-Sight 7000 series of vision systems. Below are the
features of the In-Sight 7000 Series Vision System.

Self-contained Vision Systems
These self-contained smart cameras feature autofocus, fast image capture, integrated lighting and lens, with
powerful vision tools for inspection, color and OCR models, and more. They also have the capability to power and
control a range of external lighting – all in a compact, industrial IP67 package measuring 75 mm X 55 mm X 47
mm.

Flexible Lighting
The integrated, field replaceable lighting options (red, blue, green, white and infrared) give you total flexibility.
Unlike most vision systems, the In-Sight 7000 additionally has the capability to power and control external lighting
directly, eliminating the need for external power supplies which occupy valuable machine space.
The 76xx and 78xx series have a variety of integrated light options, with software control of four banks (left, right,
top, bottom). These can be used with the SurfaceFX vision tool.

Autofocus and Lens Options
You can easily set and save the optimal focus values associated with each job on your line. The autofocus feature
simplifies setup for situations requiring regular part changes or projects that require the systems to be in hard-to-
reach spaces. In addition, integrated field-replaceable lenses, like C-mount, allow you to further customize each
system for specific applications.

Fast Image Capture
The In-Sight 7000 delivers the highest acquisition speeds of all In-Sight products at over 100 frames per second.
This rate provides reliable 100% automated inspection of products on the fastest production lines.

The In-Sight 7000 Series offers more discrete Inputs & Outputs than the other models.

In-Sight 7000 series Gen II:
- Models 76xx, 78xx, and others
- Trigger button on camera generates a manual trigger
- Tune button on camera can be used to trigger an Event function in the spreadsheet
- SD slot accommodates an SD card for saving jobs and images
- Integrated light with color and filter options; four individually controllable banks of lights

Section 1 | Slide 23 Section 1 | Slide 24


In-Sight Line Scan Camera

• Captures one line of pixels at a time, with short delay in between
• Stitches lines together into full image
• Useful where whole part is not visible at same time
  - Moving parts partially blocked by setup
  - Rotating parts

In-Sight VC200 Multi Smart Camera System

Highlights:
• Multi-view inspections (up to 4 cameras)
• Workflow diagram to control flow of data among cameras
• Can monitor & control from web browser
Section 1 | Slide 25 Section 1 | Slide 26

The In-Sight vision system, Model 5604, unites line scan imaging with the rugged In-Sight 5000 series to
give a powerful new way to capture images.

Bringing the benefits of Line Scan acquisition to the In-Sight product family, the following are the exciting
highlights of this model:

- High-speed image acquisition of 45,000 lines per second, which translates to 22 frames per
  second for full-size images.
- Two megapixel resolution images are built using a 1024 pixel-wide imager and up to 2048 lines.
- Large 14 X 14 micron pixel size yields superior light sensitivity at microsecond exposure rates,
  dramatically reducing lighting needs.
- Built-in Ethernet port provides connectivity to the automation control system using the suite of
  Cognex Connect factory floor protocols.
- Accepts standard C and CS mount lenses for simple integration.

The In-Sight VC200 Multi Smart Camera Vision System brings the proven reliability of the standalone
In-Sight vision system to multi camera vision systems. Four In-Sight cameras can easily be connected to
a controller for multi-view inspections in the manufacturing environment. For the first time, the power of
distributed computing can be leveraged with multiple smart cameras for high-performance applications.

The In-Sight VC200 uses a flexible workflow diagram to control image acquisition, vision logic, decision
making, and factory communication. The In-Sight spreadsheet is used to configure the smart cameras for
vision inspection.

The flexible diagram makes it easy to:
- Set up flexible multi smart camera triggering
- Exchange data and combine results from multiple inspections
- Create modern, powerful, web-based human machine interfaces (HMIs) for displaying images
  and results from all connected cameras
- Provide simultaneous, multi-user, platform-independent access to HMIs

Section 1 | Slide 25 Section 1 | Slide 26
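
The line-scan numbers in the notes above are internally consistent and worth a quick sanity check: a full-size frame is stitched from 2048 lines, so a 45,000 line-per-second acquisition rate yields about 22 frames per second. A minimal check in plain Python (illustration only, not In-Sight code):

```python
# Line-scan acquisition: a full image is stitched from individual
# lines, so frame rate = line rate / lines per frame.
line_rate = 45_000       # lines per second (Model 5604 spec above)
lines_per_frame = 2048   # lines in a full-size, two-megapixel image

fps = line_rate / lines_per_frame
print(f"{fps:.1f} frames per second")  # prints 22.0, matching the spec
```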


Many Choices in In-Sight Features

• Processor speed
• Resolution: 640x480 pixels - 2448x2048 pixels
• Vision Tools Included: all tools or ID tools only
• Number of Discrete Inputs and Outputs
• Image: Greyscale or Color; Area Scan or Line Scan
• Integrated light
• Auto-focus lens
• PoE (Power over Ethernet)
• Physical size: Micro (30 x 30 x 60mm) and up

See In-Sight Product Guide for details

How Can Higher Resolution Help?

Same Field of View, more pixels per feature
• 640 x 480
• 1600 x 1200
• 2448 x 2048
Section 1 | Slide 27 Section 1 | Slide 28

Many Choices in Features
• Processor speed
• Resolution: 640x480 pixels - 2448x2048 pixels
• Vision Tools Included: all tools or ID tools only
• Number of Discrete Inputs and Outputs
• Image: Greyscale or Color; Area Scan or Line Scan
• Physical size: Micro (30 x 30 x 60mm) and up

What can a camera with more pixels do for your application?
- It can provide more pixels per feature for better accuracy (same FOV)

The illustrations show a single object at the same Field of View, but the resolution is different on each.

What does this mean in vision tools?
The more pixels per feature, the more accurate your results will be. Whether this is a gauging
application, pattern location, or ID reading, think of accuracy at the sub-micron level as opposed to
mm. This is why the document with the highest resolution (2448 X 2048) is the clearest of the
three examples, and the document with the lowest resolution (640 X 480) is very difficult to read.

Section 1 | Slide 27 Section 1 | Slide 28
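
The accuracy argument above comes down to pixels per unit of part. This plain-Python sketch (the 100 mm field of view is an arbitrary example value, not from the slides) compares the three resolutions at the same FOV:

```python
# Pixels per millimeter for each sensor resolution, assuming the
# SAME horizontal field of view. More pixels per mm means more
# pixels per feature, hence finer measurement granularity.
resolutions = [(640, 480), (1600, 1200), (2448, 2048)]
fov_mm = 100.0  # horizontal field of view -- example value only

for width_px, height_px in resolutions:
    px_per_mm = width_px / fov_mm
    print(f"{width_px}x{height_px}: {px_per_mm:.2f} px/mm")

# The same arithmetic, inverted, gives the larger-FOV case on the
# next slide: holding px/mm constant, a higher-resolution sensor
# covers a proportionally wider field of view.
px_per_mm_target = 6.4  # what 640 px delivers over the 100 mm FOV
for width_px, _ in resolutions:
    print(f"{width_px} px spans {width_px / px_per_mm_target:.1f} mm")
```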


How Can Higher Resolution Help?

Larger Field of View, same pixels per feature
• 640x480
• 1600x1200
• 2448x2048

In-Sight I/O Expansion Modules

• Each model of In-Sight has a built-in number of discrete I/O lines
• More lines can be made available on most models by adding an I/O Expansion
  Module

CIO-MICRO

Section 1 | Slide 29 Section 1 | Slide 30

This illustration of the part (a clock) shows how three cameras of different resolution can obtain the same
pixels per feature at increasing Fields of View.

A 640x480 resolution camera can only show a portion of the clock. A 1600x1200 camera can show more
of the clock, because it can be moved further away (larger Field of View). A 2448x2048 camera can show
the whole clock at the same pixels per feature, by moving it even further away (larger Field of View).

The higher the resolution of the camera, the more of the clock can be viewed at the same pixels per
feature.

Similar to the CIO-1400, the CIO-Micro and the CIO-Micro-CC extend the capabilities by adding discrete
inputs/outputs, and hardware handshaking for serial communications.

These modules are compatible with the In-Sight Micro and 5600 series vision systems.
- Optically isolated trigger input
  - ON: 20 to 28V (24V nominal), <7.5 mA
  - OFF: 0 to 3V (8V nominal threshold), <250μA; Resistance ~10,000 Ohms
- 8 optically isolated discrete inputs (Maximum 30 VDC, 104 mA)
- 8 optically isolated discrete outputs (Maximum 30 VDC, 100 mA)

High Speed Outputs
- In-Sight Micro series vision systems: 2 optically isolated discrete outputs (Maximum 28 VDC, 200 mA)
- In-Sight 5600 series vision systems: 2 discrete outputs (Maximum 28 VDC, 200 mA)

The minimum firmware version for the CIO-Micro is In-Sight version 4.2.0 or later, and the minimum
firmware version for the CIO-Micro-CC is In-Sight version 4.3.0 or later.



In-Sight I/O Expansion Modules

Discrete I/O

Series   Discrete Inputs              Discrete Outputs
         without      with            without      with
         expansion    CIO-MICRO       expansion    CIO-MICRO
Micro    0            8               2            10
2000     1            NA              4            NA
5000     0            7               2            8
7000     3            11              4            12-14

All cameras also have a dedicated trigger input (Trigger+ and Trigger-).

Networking a Camera

Section 1 | Slide 31 Section 1 | Slide 32

This chart outlines the different models of I/O Expansion Modules and the benefits of each.

What I/O module goes with what camera?

- CIO-Micro (CC)
  - In-Sight Micro
  - In-Sight 7000 Series
  - In-Sight 5600 Models

- CIO-1400
  - In-Sight 5000 Series

- The In-Sight 2000 series does not have an I/O Expansion Module

This section will outline the steps to connect the In-Sight camera (or emulator) to the network.
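The chart above can be captured as a small lookup sketch (Python; the dictionary structure and function name are our own, and since the 7000 series lists 12-14 expanded outputs, the sketch records the minimum):

```python
# Discrete I/O per In-Sight series, from the chart above, as
# (inputs, outputs) tuples. None marks series with no expansion option.
DISCRETE_IO = {
    "Micro": {"base": (0, 2), "cio_micro": (8, 10)},
    "2000":  {"base": (1, 4), "cio_micro": None},
    "5000":  {"base": (0, 2), "cio_micro": (7, 8)},
    "7000":  {"base": (3, 4), "cio_micro": (11, 12)},  # outputs listed as 12-14
}

def io_counts(series, expanded=False):
    """Return (inputs, outputs) for a series, optionally with a CIO-MICRO."""
    entry = DISCRETE_IO[series]
    if expanded:
        if entry["cio_micro"] is None:
            raise ValueError(f"No expansion module for the {series} series")
        return entry["cio_micro"]
    return entry["base"]
```

For example, `io_counts("Micro", expanded=True)` returns `(8, 10)`, matching the chart's row for the Micro series with a CIO-MICRO attached.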



Terminology: TCP/IP and IP address

TCP/IP – Transmission Control Protocol / Internet Protocol, a widely used protocol for communication on networks, including the Internet

IP Address – At any point in time, each device on a given network must have a unique address of the form xxx.xxx.xxx.xxx, where xxx is a number 0-255

Terminology: IP address

• Two types of networks:
  - Static: IP addresses assigned by a person
  - Dynamic (DHCP): IP addresses assigned by a server (computer)
• In-Sight can be configured for either type of network
• A new In-Sight is DHCP

Section 1 | Slide 33 Section 1 | Slide 34

IP Address: Each device on a network must have a unique IP address of the form XXX.XXX.XXX.XXX, where each XXX is a number from 0 to 255 (the host portion typically uses 1 to 254). Example: 192.168.0.5.

NOTE: Microsoft's default TCP/IP network is 192.168.0.XXX.

There are two types of networks:

- Static IP Address
  - IP addresses and subnet mask are assigned by a person (Network Administrator)
  - Stays the same with power cycling
- Dynamic IP Address
  - IP addresses and subnet mask are assigned by a server (computer)
  - Might change with power cycling, depending on the server

NOTE: In-Sight can be configured on either type of network.

A new or repaired camera is shipped as DHCP.



Terminology: Subnet

Allowable IP addresses on a network are defined by its subnet mask

Example: 255.255.255.0

Addresses on this subnet could be:
- 192.168.0.1
- 192.168.0.4
- 192.168.0.126
- 192.168.0.203

Emulator

• Emulator: In-Sight Explorer running on a PC
• Select any In-Sight model to emulate in System → Options → Emulation
  – All tools for that model are made available, including PatMax
• You must use stored images with the emulator
• You can develop jobs even when not logged onto a camera
  – Requires an offline programming key

Section 1 | Slide 35 Section 1 | Slide 36

The Subnet is a group of networked In-Sights with similar IP addresses. The Subnet Mask defines which part of the IP address refers to the network and which part refers to the host. The subnet mask must be the same for all devices on a network. Example: 255.255.255.0.

NOTE: 255 means all IP addresses on this subnet are identical in this position; 0 means each IP address is different in this position.

In a static network, a person specifies the subnet mask. In a dynamic network, the DHCP server assigns the subnet mask.

There are three types, or classes, of subnet masks. The class of a particular subnet is defined by the number of bits used to represent the network and host address portions of the IP address, as in the table below:

Class   Subnet Mask       Network Address   Host Address
A       255.0.0.0         8 bit             24 bit
B       255.255.0.0       16 bit            16 bit
C       255.255.255.0     24 bit            8 bit

For example, consider a networked In-Sight host system with the IP address 192.168.0.1. The first three numbers (192.168.0) identify the 24-bit network address, and the last number (1) is the 8-bit address for the In-Sight host on the network, so the subnet mask for this host is 255.255.255.0.

In the System → Options → Emulation menu, In-Sight Explorer can be configured to emulate any model of In-Sight camera. An emulator includes tools that are optional, such as PatMax. A color camera emulator includes color tools.

Using an emulator lets you try out cameras that you don't have, which may have different tools or a different resolution. You must use stored images, and those images must match the resolution of the camera being emulated.

You can run the emulator even when no cameras are networked to the PC. This is called standalone mode or offline programming. You enable this on a PC by entering a key, a number obtained from the Cognex web site that is different for each PC. You only need to enter the key once for a given PC. Obtaining a key will be explained when the System Options menu is covered in the Deployment section of this course.
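The subnet rules above can be checked with Python's standard ipaddress module (a sketch, not part of In-Sight Explorer; the helper name is our own):

```python
import ipaddress

def same_subnet(ip_a, ip_b, mask):
    """True if both addresses fall inside the network defined by the mask."""
    net = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    return ipaddress.ip_address(ip_b) in net

# With the class C mask from the table, hosts differing only in the last
# octet share a subnet; a different third octet does not.
same_subnet("192.168.0.1", "192.168.0.203", "255.255.255.0")  # True
same_subnet("192.168.0.1", "192.168.1.5", "255.255.255.0")    # False
```

This mirrors the note that 255 octets of the mask must match exactly while 0 octets are free to differ.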



How To Control Your Camera

Three Ways to Refer to a Camera

• MAC Address is printed on the camera
• Name is what you see in the Network pane

Section 1 | Slide 37 Section 1 | Slide 38

Before you can control your camera, you must log on to it. To log on, follow these steps:
1. Find your camera in the In-Sight Network pane.
2. Double-click on your camera.
3. A spreadsheet will load with an image behind it.

Refresh (<F5>) to make sure the list is up to date.

There are three ways to refer to a camera:

- IP Address
  - Set by network administrator or server
  - Can be changed, but must be unique on the network

- MAC Address
  - Unique, set by the manufacturer
  - Cannot be changed
  - 12 characters, found on the label on the In-Sight camera
  - All In-Sight MAC addresses begin with 00d024

- Host Name
  - Shows in the Spreadsheets connection list
  - Can be changed by the user
  - Default is Model_LastPartOfMAC
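As a sketch of the naming scheme described above (the 00d024 prefix comes from the notes; the helper functions are our own, not In-Sight Explorer code):

```python
COGNEX_OUI = "00d024"  # first 6 hex characters of every In-Sight MAC, per the notes above

def is_insight_mac(mac):
    """True for a 12-hex-character MAC starting with the Cognex prefix."""
    mac = mac.replace(":", "").replace("-", "").lower()
    return len(mac) == 12 and mac.startswith(COGNEX_OUI)

def default_host_name(model, mac):
    """Mimic the default naming scheme: model plus the last part of the MAC."""
    return f"{model}_{mac.replace(':', '').replace('-', '')[-6:]}"
```

So a camera labeled 00d024112233 would, under this sketch, default to a host name ending in 112233.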



Add Sensor / Device to Network

Cameras with incorrect network settings should appear in the Add Sensor box. A blank window means all cameras on the network are properly configured.

• Click on the Host Name, change settings to match the network, and click Apply
• If the camera still appears in the box, keep the box open and recycle power on the camera

Section 1 | Slide 39 Section 1 | Slide 40

This dialog automatically detects In-Sight sensors with invalid network settings on your TCP/IP subnet and displays them in a list. Any In-Sight sensor or Cognex Ethernet device that is power cycled on the subnet while the dialog is open is also added to the list.

To add a camera not found on the list of cameras:
1. Check to make sure the camera is powered (power LED on the camera)
2. Check to make sure the camera has an active network connection (network LEDs on the camera)
3. Click System → Add Sensor/Device to Network

IP Address: Each device on a network must have a unique IP address of the form XXX.XXX.XXX.XXX, where each XXX is a number from 0 to 255 (the host portion typically uses 1 to 254). Example: 192.168.0.5.

Subnet Mask: Must be the same for all devices on a network. Example: 255.255.255.0.

The following information will be entered in the appropriate fields:
- Name of Sensor
- IP Address (dynamic or static)
  - Static IP Address
  - Subnet mask for network
  - Gateway settings for network *
  - Domain Name Server *
  - Domain Name of Network *
- Get settings from PC
- Reset Admin Password
- Reset to DHCP (no static settings); I/O and time settings and the Name will remain as previously set

* Needed when attaching to a company or controlled network



Listing all Cameras on Network

To see and/or change cameras already on the network:
• Click on Show All
• Select a camera and alter its settings on the right

Resetting a Camera to Factory Defaults

• Resets discrete I/O, Startup, and User List settings
• Releases the I/O module
• Does not affect jobs in the camera (but it is a good idea to have a Backup)
Section 1 | Slide 41 Section 1 | Slide 42

To see and change models already on the network: - Resets discrete I/O, Startup, and User List settings.
- Click on Show All - Releases IO module.
- Now you can select one and alter its settings - Does not affect jobs in the camera – but it is a good idea to have a Backup.



Summary

• In-Sight comes in a variety of models to accommodate resolution and speed requirements.
• Additional discrete inputs and outputs can be achieved through the use of an I/O Expansion Module.
• Configuring a camera is made simple through the use of the Add Sensor utility.

Lab Exercise

Section 1 | Slide 43 Section 1 | Slide 44

In this section we covered the following topics:

- In-Sight comes in a variety of models to accommodate resolution and speed requirements.
- Additional discrete inputs and outputs can be achieved through the use of an I/O Expansion Module.
- Configuring a camera is made simple through the use of the Add Sensor utility.

Complete:

Lab Exercise 1.1 – Getting Connected



In-Sight Spreadsheets Standard Section 1 | Lab Exercise

Lab Exercise 1.1 – Getting Connected

At the end of this lab exercise, Participants will be able to:
• Identify the camera system components
• Launch the In-Sight Explorer software
• Change the In-Sight interface from the EasyBuilder to the Spreadsheet view

The Participant will utilize the following In-Sight Functions to successfully complete this exercise:
• Software Menus
• Spreadsheet
  − AcquireImage cell

NOTE: If you make a mistake or want to stop editing a cell, you can press the <Esc> key on your keyboard to back out.

Follow the steps below to complete the lab exercise:

1. Assemble the hardware components.
   NOTE: The tripod should have the top four portions of its legs pushed back in to get the proper height. The unit should be directly above the part below it with the lens pointing down.

2. Confirm there is power and network connectivity to the unit.

Normal LED Pattern:
7000 Series – power LED and ENET connector should be lit in green
2000 Series & 7000 Gen II Series – power LED green, network LED yellow
5000 Series – power LED and ENET connector should be lit in green
Micro Series – ENET LED should be green

In-Sight 7000 Series   In-Sight 5000 Series   In-Sight Micro
In-Sight 2000 Series and 7000 Gen II Series

3. Look at the set-up at your work station and make note of which item is the In-Sight sensor (camera) and which is the I/O Expansion Module.
   Look at the type of hardware that you are using and make note of it below:
   In-Sight Sensor Type: _____________________________________________
   I/O Module Type: ________________________________________________

4. Click the In-Sight Explorer icon on your desktop to launch In-Sight Explorer. Or, Start Menu → Programs → Cognex → In-Sight Explorer (ISE) on your PC.

5. Next, you are going to set your camera to factory defaults, which will remove any changes in settings made to your camera by a previous class. (If you already did this earlier in the class, skip to step 7.)

**IS5000 series only**

Page 1 Page 2
In-Sight Spreadsheets Standard Section 1 | Lab Exercise

Go to the System → Add Sensor menu.

6. This will bring you to the Add Sensor/Device to Network window. Your camera's name should not appear, because it is already properly configured.

   To show all cameras that are properly networked, click on Show All.

7. After a few seconds, a list of all cameras on the network, including yours, should appear. Click on yours, then click on the checkbox labeled Reset Sensor Settings to Factory Defaults.

   Click Apply and follow the resulting instructions to cycle power on your camera. This will take about 2 minutes, at which point a message will indicate that the reset was successful. Close the Add Sensor window.

8. Log on to your In-Sight camera.

9. Confirm that you are in the Spreadsheet View.

10. In the Application Menu, click View → Spreadsheet. If you do not see this option in the View menu, then you are already in the spreadsheet view.

Page 3 Page 4
In-Sight Spreadsheets Standard Section 1 | Lab Exercise

11. Click the New Job button.

    A blank spreadsheet displays.

12. Click View → In-Sight Network to see all of the cameras and emulators that are on the network.

13. Click View → Palette to view all of the tools available.

Page 5
In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard section.

Section 1 – Hardware and Connections

• Demonstrate how to connect the In-Sight camera to the network
• Explain who Cognex is and its place in the market
• Identify the In-Sight Product Offerings
• Discuss how Image Chip pixels and Field of View affect resolution

1. Your camera is not seen on the list in the Network Pane. List possible causes and solutions.

2. Name two other lines of machine vision products besides In-Sight that Cognex offers.

3. List at least three Model series of In-Sight cameras.

4. Name the two User Interface modes of In-Sight Explorer.

5. What are at least three benefits of the Spreadsheets Interface?

6. What are the results of using a higher resolution imaging chip?

Page 1 Page 2
Spreadsheets and Image Acquisition
Section 2

Objectives

At the end of this Section Participants will be able to:
• Manage multiple networked In-Sight systems from a single PC
• Save job files
• Open job files
• Explain the basic principles and terminology of image acquisition
• Navigate through a spreadsheet
• Record and play back images

Section 2 | Slide 2

This section covers the basics of In-Sight Explorer and the spreadsheet, including how to set up equations with cell references. It also explains the settings for the vision tool that obtains an image, called AcquireImage.

At the end of this Section Participants will be able to:

- Manage multiple networked In-Sight systems from a single PC
- Save job files
- Load job files
- Explain the basic principles and terminology of image acquisition
- Navigate through a spreadsheet
- Record and play back images

Section 2 | Slide 1 Section 2 | Slide 2


What is In-Sight Explorer?

In-Sight Network Pane

Section 2 | Slide 3 Section 2 | Slide 4

In-Sight Explorer provides a powerful and completely integrated vision system configuration, management and operator interface, all within a single software package. In-Sight Explorer includes two development environments to program and manage In-Sight vision systems: EasyBuilder and Spreadsheet.

- Application that manages multiple networked In-Sights
- Log onto any In-Sight system on the network
- Create and modify jobs on any In-Sight
- View and manage multiple jobs simultaneously

In-Sight Explorer automatically detects any In-Sight vision systems on your subnet and displays them. Vision systems with a camera icon represent actual In-Sight vision systems (for example, the In-Sight 5100) while the computer icon represents In-Sight emulators running on networked PCs. The name of the emulator is the computer's name under Microsoft Windows. To make sure the list is current, select View → Refresh or press <F5>.

You can view more than one camera in In-Sight Explorer by going to the Windows menu and selecting one of the Tile options. This will display all the cameras to which you are connected.

The In-Sight Network Pane is a 'tree' containing all of the In-Sight vision systems and emulators automatically detected on the subnet (including any enabled and configured RAM Disk folders), as well as any hosts that have been entered manually into the Explorer Host Table. The In-Sight Network pane offers a great deal of flexibility when managing multiple In-Sight devices at once.

Right-clicking an In-Sight device in the In-Sight Network pane displays a menu that allows the following file operations to be performed:

- Show EasyBuilder View – Shows the current image from the In-Sight vision system and the current job in the EasyBuilder graphical user interface.
- Show Spreadsheet View – Shows the current image from the In-Sight vision system as well as the semi-transparent spreadsheet overlay or Custom View (optional).
- Show Sensor Status View – Displays information contained within the memory buffers that make up the Machine Status Stack.
- Paste – Pastes job files that have been copied to the clipboard.
- Create Report – Generates an .HTML or .XML report that contains job and network configuration details for one or more In-Sight devices.
- Backup – Stores an archive of job and configuration data for this device to the Backup directory.
- Restore – Retrieves an archive of job and configuration data for this device.
- Properties – Displays details of the vision system's hardware information and network identification, as well as the vision system's flash and RAM memory configuration.



In-Sight Spreadsheet Pane

The In-Sight Spreadsheet pane is the area used to organize and build your vision application. This is also where the image will be displayed (behind the spreadsheet).

In-Sight Files Pane

Section 2 | Slide 5 Section 2 | Slide 6

The main component of In-Sight Explorer's graphical user interface (GUI) is a spreadsheet, an adjustable, semi-transparent overlay that is superimposed onto a video image acquired from an In-Sight vision system.

In addition to the semi-transparent spreadsheet overlay containing the active job, the Spreadsheet view consists of:
- The active image from the In-Sight camera.
- The status bar, which indicates the current available job size (as the number of available, allocated cells), the Online/Offline status, and the job execution time of the active In-Sight camera.
- The title bar, indicating the camera name and file name of the active job.

The In-Sight Files pane allows you to see the files currently stored in your In-Sight system.



In-Sight Palette Pane

In-Sight Shortcut Buttons & Scroll Mode

Scroll Mode toggles scrolling between the spreadsheet & image

Section 2 | Slide 7 Section 2 | Slide 8

The In-Sight Palette pane allows you to select functions to use in your inspection; you can drag and drop them into the spreadsheet pane for use in your application.

In-Sight shortcut buttons allow you to quickly access common functionality. They feature most of the same functionality as the menus, yet they are more easily accessible. It is personal preference as to which method you use. Both the menus and the shortcut buttons will bring you to the same place.



Working with Images

In this section we will cover how to work with images.

Field of View (FOV)

Section 2 | Slide 9 Section 2 | Slide 10

Field of View is the image area of an object under inspection. This is the portion of the object that fills the camera's sensor. Field of View is critical for choosing the correct optical components to use in an imaging application. Since resolution is dependent on field of view, determining the field of view affects what one is trying to analyze or measure.



Image Acquisition

Pixels – Monochrome Cameras

0 255

Section 2 | Slide 11 Section 2 | Slide 12

The Charge Coupled Device (CCD) is an integrated circuit etched onto a silicon surface, forming light-sensitive elements called pixels. This is also called the image sensor. The CCD converts light into an analog voltage; the camera then converts the analog voltage into a digital value.

The image is divided into a grid of square picture elements called pixels. Each pixel consists of a location within the image (X/Y coordinates) and a light intensity value or values.

The light intensity in a Monochrome Camera is represented as a greyscale value from 0 to 255, with 0 representing black and 255 representing white.
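A minimal sketch of that model (plain Python, illustrative values only): an image as a grid of 0-255 greyscale values, addressed by location:

```python
# A tiny 2x3 "image": rows of greyscale pixel values (0 = black, 255 = white).
image = [
    [0, 128, 255],
    [64, 192, 32],
]

def pixel(img, x, y):
    """Intensity at column x, row y (the top-left pixel is (0, 0))."""
    return img[y][x]

def mean_brightness(img):
    """Average greyscale value over the whole image."""
    flat = [v for row in img for v in row]
    return sum(flat) / len(flat)
```

Every vision tool discussed later ultimately works on such a grid of digitized intensity values.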



Pixels – Color Camera

Physical Setup – Lens

• Standard lens has two rings for adjusting:
  - Aperture
  - Focus
• Autofocus lens focuses automatically
  - Instead of aperture, adjust brightness using Exposure in software

Section 2 | Slide 13 Section 2 | Slide 14

In a Color Camera each pixel is composed of three separate color components. Each component of a pixel is converted to a value from 0 to 255.

These components may be represented as combinations of:
- Red, Green, and Blue (RGB)
  - Red = 255, 0, 0
  - Green = 0, 255, 0
  - Blue = 0, 0, 255
- Hue, Saturation, and Intensity (HSI)
  - Red = 0, 255, 85
  - Green = 85, 255, 85
  - Blue = 170, 255, 85

NOTE: In-Sight Color vision systems report the color components of the pixel.

The camera's aperture setting controls the area over which light can pass through your camera lens. It is specified in terms of an f-stop value. The area of the opening increases as the f-stop decreases, and vice versa. The camera's aperture setting is what determines the depth of field: a wide opening will give a shallow depth of field, and a narrow opening will give a wide depth of field.

A Fixed Aperture functions independently of the lens focal length. The barrel of the lens does not extend or retract when the focal length changes. The fixed aperture effectively prevents accidental changes in the size of the aperture, or slowly shifting aperture values caused by vibration.

An Adjustable Aperture can be adjusted to vary the amount of light reaching the focal plane of the imager.

Focus is an important aspect of many applications involving machine vision. The degree of focus in an image is a factor in determining image quality. For example, a focused image may contain details not present in an unfocused image of the same part.
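The HSI values on the slide can be reproduced with a short sketch (Python's standard colorsys supplies hue and saturation; intensity is taken as the channel mean, all scaled to 0-255 — an approximation for illustration, not Cognex's exact conversion):

```python
import colorsys

def rgb_to_hsi255(r, g, b):
    """Convert 8-bit RGB to approximate 8-bit Hue/Saturation/Intensity.

    Hue and saturation come from colorsys (HSV); intensity is the mean
    of the three channels, matching the values listed on the slide."""
    h, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    intensity = (r + g + b) / 3
    return round(h * 255), round(s * 255), round(intensity)
```

For example, pure red (255, 0, 0) maps to (0, 255, 85): hue 0, full saturation, and an intensity of 255/3 = 85, exactly as on the slide.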



Physical Setup – Lighting

Spreadsheet: Image Coordinate System

Diagram: the image coordinate system for a 640x480 image. The corners run from (0,0) at the top left to (479,639) at the bottom right, and each location, such as (X,Y) = (300,200), is a pixel with a greyscale value 0-255.

Section 2 | Slide 15 Section 2 | Slide 16

In a later section on Lighting and Optics, we will take an in-depth look at lighting and optics and how they affect the quality of the image.

The default setting is (0,0,0), the top leftmost corner of the image.
- Row (X) – The row offset, in image coordinates.
- Column (Y) – The column offset, in image coordinates.
- Theta – The angle of orientation, in the image coordinate system.



Online vs. Offline

Online means that all In-Sight Input and Output signals are enabled. Offline means that most In-Sight Input and Output signals are disabled.

How To Capture Images

Manual Trigger (single image)
Live Mode (live image)
You can also use the Image menu

Section 2 | Slide 17 Section 2 | Slide 18

Online means that all In-Sight Input and Output signals (discrete, serial, network, and non-manual triggers) are enabled.

When Online:
You can do this:
- Acquisition triggers
- Serial I/O
- Discrete I/O
- Network I/O
But not this:
- Edit spreadsheet
- Open Property Sheets

Offline means that most In-Sight Input and Output signals are disabled.

When Offline:
You can do this:
- Edit spreadsheet
- Open Property Sheets
But not this:
- Acquisition triggers
- Serial I/O
- Discrete I/O
- Network I/O

To capture an image:
1. Log on to your camera
2. Click on the Manual Trigger or Live Mode button
3. The image will be displayed in the Spreadsheet pane



Image Saturation Tool

The Show Brightness Feedback toolbar button assists with obtaining a good image.

Not bright enough    Too bright    Good

• The goal is not to have blue or red span a feature and the background
• Thresholds for blue/red are set in System → Options (more later)

How To Change Image Settings

Section 2 | Slide 19 Section 2 | Slide 20

Red or blue shown on the image is not necessarily a problem if it is limited to the background or limited to a feature. It can indicate a problem when it overlaps both a feature and the background.

The leftmost image needs more brightness. This can be accomplished with more lighting, a larger lens aperture, and/or a longer exposure. The middle image needs less brightness. This can be accomplished with less lighting, a smaller lens aperture, and/or a shorter exposure. The rightmost image is good: even though there is some red, it is limited to features and does not overlap into the background.

The greyscale thresholds are set in System → Options, with defaults of 5 (blue below 5) and 240 (red above 240).

To change the image settings:
1. Log on to your camera
2. Double-click on cell A0
3. The AcquireImage Property Sheet will display
4. Make the required changes and click the OK button
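A sketch of the feedback logic (the thresholds 5 and 240 come from the defaults above; the function and marker names are our own, operating on a nested-list greyscale image):

```python
# Default thresholds from System > Options: below LOW shows blue, above HIGH shows red.
LOW, HIGH = 5, 240

def brightness_feedback(img):
    """Count pixels a brightness-feedback overlay would mark blue, red, or leave alone."""
    counts = {"blue": 0, "red": 0, "ok": 0}
    for row in img:
        for v in row:
            if v < LOW:
                counts["blue"] += 1
            elif v > HIGH:
                counts["red"] += 1
            else:
                counts["ok"] += 1
    return counts
```

An image dominated by "blue" counts needs more brightness; one dominated by "red" needs less.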



Trigger for AcquireImage

Parameters for AcquireImage

Section 2 | Slide 21 Section 2 | Slide 22

Unlike other spreadsheet functions, AcquireImage exists permanently in cell A0 and cannot be cut, copied or cleared from the spreadsheet; this enforcement guarantees that an image is always available in a predefined location. Most vision tool functions, as well as many other functions, take advantage of this fact by specifying an absolute reference to cell A0 as their default image source.

Trigger – Specifies the source of the image acquisition trigger when the In-Sight vision system is online.
- Camera – Enables image acquisition on a rising edge sensed at the vision system's dedicated acquisition trigger input. This is a hardwired trigger, i.e. there is a dedicated wire in a black cable that supplies 24 volts to the camera.
- Continuous – Enables 'free running' (as fast as possible) image acquisitions.
- External – Enables image acquisition on a serial command, on a rising edge applied to a discrete input line configured as an Acquisition Trigger, or from a PLC using a real-time Ethernet protocol.
- Timestamp – Enables image acquisition when a timestamp trigger is sent to the vision system from a PLC over EtherNet/IP.
- Manual – Enables image acquisition when pressing F5. In order for F5 to manually acquire an image, the spreadsheet must have focus. Use this for labs.
- Network – Enables image acquisition when the specified In-Sight 'Master' system on the network is triggered. The Master checkbox must be Off, and a valid Master Name must be specified.
- Industrial EtherNet – Enables image acquisition on triggers originating from a real-time Ethernet protocol, such as EtherNet/IP, POWERLINK, PROFINET or SLMP Scanner.

A Camera trigger gives precise triggering (extremely low latency between trigger arrival and image acquisition start). An External trigger has <1 ms latency, but a Camera trigger provides higher precision. In a few applications, an External trigger provides additional functionality because it involves the software; a Camera trigger is handled just by the hardware.

The default AcquireImage parameter settings vary based on the vision system model. If a job is developed on one model and then loaded onto a different model, verify that the parameter settings are appropriately configured.

Orientation – Specifies the orientation of the image.
- 0 = Normal (default)
- 1 = Mirrored horizontally
- 2 = Flipped vertically
- 3 = Rotated 180 degrees

Buffer Mode – Specifies the number of buffers used for image acquisition. The Buffer Mode parameter cannot be modified when the camera is Online.
- 0 = Overlapped (default) – The number of image buffers specified in the Image Buffers dialog will be used for image acquisition.
- 1 = Single – Only a single buffer will be used for image acquisition. This option is only supported when the Trigger parameter is set to Camera.
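The four Orientation codes can be sanity-checked with a small sketch (our own helper operating on a nested-list image, not In-Sight code):

```python
def orient(img, code):
    """Apply an AcquireImage-style Orientation code to a row-major image.

    0 = normal, 1 = mirrored horizontally, 2 = flipped vertically,
    3 = rotated 180 degrees."""
    if code == 0:
        return [row[:] for row in img]
    if code == 1:
        return [row[::-1] for row in img]      # reverse each row
    if code == 2:
        return [row[:] for row in img[::-1]]   # reverse the row order
    if code == 3:
        return [row[::-1] for row in img[::-1]]
    raise ValueError("Orientation must be 0-3")
```

Rotating 180 degrees is the composition of mirroring horizontally and flipping vertically, which is why codes 1, 2 and 3 are related.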
In-Sight 7802 Integrated Light

• Exposure: can be set here or in AcquireImage
• Intensity: determines how long the light is on during the exposure (50-100%)

Opening Jobs

Section 2 | Slide 23 Section 2 | Slide 24

This dialog allows you to configure 0-4 banks of lights. If using the integrated light, make sure Integrated is checked.

- Exposure is the exposure time, and can be set here or in AcquireImage.
- Intensity controls how long the light is on during the exposure time. (The actual brightness of the light does not change.)

Check one of the four checkboxes at the left to activate that bank. Then determine the proper Exposure and Intensity values by observing the image as you vary one or both parameters.

Uncheck the first checkbox, and repeat the process for each of the other three banks of lights. All must have the same Exposure value and the same Intensity value, so for SurfaceFX you may need to come to a compromise on these settings so that all four banks yield good images.

Whatever lights you leave checked will come on when in Live Video and when triggering the camera. The exception is if you have an IntegratedLightControl function in the spreadsheet, in which case its bank settings will override the Light Settings bank settings. This will be the case in the SurfaceFX demo later in this section.

The Open dialog will allow you to load a job file into the memory of an In-Sight vision system as the active job.

To open a job:
1. Click Open Job on the File menu
2. Choose the location that contains the desired job file to open
3. Highlight the desired job and click Open – alternatively, you can double-click the job file or type in the file name



Saving Jobs

The number of jobs that may be saved on In-Sight is limited only by memory.

Selecting Job for Automatic Startup

Section 2 | Slide 25 Section 2 | Slide 26

When you create or edit a job on a camera, the job is initially located in a type of memory called RAM (Random Access Memory). Only one job is in RAM at a time. The contents of RAM are lost if power is cycled.

Save Job and Save Job As copy the job from RAM to a location where information remains through a power cycle. On a camera, that location is called flash memory. On a PC, it's the hard drive. That location can hold as many jobs as will fit. The number of jobs will depend on the size of the jobs and the size of the storage medium.

Save Job and Save Job As save the contents of the spreadsheet, including parameters in Property Sheets and trained patterns. They do NOT save I/O settings.

You can Save a job to any In-Sight host (In-Sight 5XXX, Emulator, etc.). You can Open from any host. Save Job overwrites the previous version of the job. Save Job As lets you save the job to a different location and under a different name.

In the Sensor Startup dialog, you can designate any job stored on the camera or its SD card to automatically be opened when the camera is powered on. In addition, you can indicate that the camera should go online automatically at power on.



Saving Images

You can save images to a PC only.

Recording Images

Recording Options
Starts and Stops Recording

Section 2 | Slide 27 Section 2 | Slide 28

Saving an image as a Bitmap (BMP) saves the entire image – all the pixels – exactly as it was captured by the camera. If you intend to Open the image in a job later on, you likely want BMP format.

Saving as a JPG compresses the image so that the file is smaller. Because information is lost, it is not the same as the original image, and will not behave as the original image if you Open it in a job. A JPG might be useful if you want to view the image and storage space is very limited.

In-Sight allows you to record and play back your acquired images.

To record images:
1. Open Record/Playback Options
2. Using the Record tab, set the desired parameters for recording, then click the OK button
3. Click the Record button
4. Acquire images; the images will be recorded according to the selected options



Image Playback | Sensor Filmstrip

Playback Options

Starts and Pauses


Playback Queue:

Section 2 | Slide 29 Section 2 | Slide 30

In-Sight allows you to play back recorded or previously saved images.

To play back images:
1. Open Record/Playback Options
2. Using the Playback tab, set the desired parameters for playback, then click the OK button
3. Click the Play button
4. Images will be played back according to the selected options (all images within the specified
Windows folder)

When the vision system is Online and acquiring images, the Sensor filmstrip can be used to monitor a
job’s performance. As images are acquired, the job results, including the acquired image and
accompanying job data, are stored to the vision system’s RAM.

As results are stored to the vision system, a pass or fail graphic is added to the filmstrip. When a result is
highlighted in the filmstrip, the filmstrip display changes from graphics to thumbnail images and the
corresponding image is loaded to the display area.

Use the controls in the Sensor Settings group box of the Filmstrip Application Step to configure the vision
system’s behavior. Up to 20 results can be saved, depending on the vision system’s resolution and
available RAM.

The images stored in the Sensor Filmstrip can be selected from the following choices in the Queue pull-
down menu:
• Pass Results only
• Fail Results Only
• Pass and Fail Results
• Separate Pass and Fail Results: all images are stored, and you can select one or both to
display by clicking on green/red boxes at right
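The queue behavior described above can be sketched as a small Python model (illustrative only; the Filmstrip class, its method names, and the queue strings are ours, not an In-Sight API):

```python
from collections import deque

class Filmstrip:
    """Toy model of the Sensor filmstrip queue.

    Results are held in RAM up to a fixed queue size; the Queue setting
    decides which results are stored, and changing it clears the queue,
    mirroring the note that editing queue settings deletes all results.
    """
    def __init__(self, queue="Pass and Fail Results", size=20):
        self.queue = queue
        self.results = deque(maxlen=size)   # oldest results fall off first

    def add_result(self, image, passed):
        if self.queue == "Pass Results Only" and not passed:
            return
        if self.queue == "Fail Results Only" and passed:
            return
        self.results.append((image, passed))

    def set_queue(self, queue):
        self.queue = queue
        self.results.clear()                # stored results are deleted

strip = Filmstrip("Fail Results Only", size=3)
for i, ok in enumerate([True, False, True, False]):
    strip.add_result(f"img{i}", ok)
print(len(strip.results))  # -> 2: only the two failing acquisitions are kept
```

The bounded deque also illustrates why enabling the filmstrip costs RAM and, potentially, job execution time.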

NOTE: The vision system must be Online for results to be added to the filmstrip.
Enabling the Sensor filmstrip may increase the job execution time; ensure that the increased time is
acceptable for your application.

NOTE: All results are deleted from the vision system if the Queue, Queue Size or Queue Type settings are
modified, a new job is loaded, or the vision system is power cycled.



Spreadsheets | Formulas and References

Row Numbers Column Letters

Two types of reference:

Absolute: unchanged when copied


A5: $A$3 + $A$4 B5: $A$3 + $A$4

Relative: changes when copied


A5: A3 + A4 B5: B3 + B4

Active Cell A2
(selected cell) Cell C4 Cells

Section 2 | Slide 31 Section 2 | Slide 32

A Spreadsheet is a document automatically formatted into rows and columns.
- The Columns are lettered
- The Rows are numbered

Each location is called a cell and is denoted by its column and row.

A cell reference is a link to a different cell. When data in the source cell changes, the value in the
destination cell (the one that contains the reference) updates automatically. The link is unidirectional,
however, as the source cell does not know that it is being referenced by another cell.

There are two kinds of references:

- Relative – a reference whose row and column addresses can vary when copied to a new
location. The amount of change is equal to the distance (both row- and column-wise) between the
copied and pasted cells. For example, assume that cell A2 contains the relative reference A1. If
cell A2 is copied to cell S43, then cell S43 will contain S42. In this case, the cell reference is
essentially saying ‘always point one cell above me’.

- Absolute – a cell reference that does not change when copied to a new location. In formulas, a
dollar sign ($) indicates an absolute reference; this symbol can be used in conjunction with any
row or column (or both) to construct an absolute cell reference.
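The copy-and-paste behavior of a relative reference can be modeled in a few lines of Python (an illustrative sketch, not In-Sight code; rows and columns are treated as 0-indexed numbers here):

```python
def copy_reference(ref_row, ref_col, src, dst):
    """Model how a relative reference shifts when its host cell is copied.

    A relative reference is effectively stored as an offset from the cell
    that contains it, so pasting re-applies that offset at the new spot.
    src and dst are (row, col) positions of the copied and pasted cells.
    """
    d_row, d_col = dst[0] - src[0], dst[1] - src[1]
    return ref_row + d_row, ref_col + d_col

# Cell A2 (row 2, col 0) holds the relative reference A1 (row 1, col 0).
# Copied to S43 (row 43, col 18), the reference becomes S42:
print(copy_reference(1, 0, (2, 0), (43, 18)))  # -> (42, 18), i.e. S42
```

An absolute reference like $A$3 simply skips this adjustment: the stored row and column are returned unchanged no matter where the formula is pasted.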



Formulas and References | Completing References

Creates Creates
Absolute Relative Accept Cancel
Use the formula
Reference Reference bar to enter
formulas

Section 2 | Slide 33 Section 2 | Slide 34

You can enter references directly into the formula bar. In-Sight will highlight the referenced cells using
different matching colors to make them more easily identifiable (as in A2 and B2).

When making references, you must Accept each reference and the final formula.

TIP: You can usually use the <Enter> key to Accept or the <Esc> key to Cancel.



How To Enter Formulas & References
1.

2.
Formula bar shows formula

3.
Cell shows result of formula

4.

Section 2 | Slide 35 Section 2 | Slide 36

To enter formulas and references:

1. Select an empty cell
2. Click inside the formula bar
3. Use the Absolute or Relative reference buttons to make references to cells needed to complete
the formula
4. Click the green check button to save the changes made



Help | Summary

• One In-Sight Explorer can manage multiple networked


In-Sight systems from a single PC.

• A spreadsheet is composed of cells. The highlighted cell


is called the active cell. The spreadsheet and its current
contents are called a job.

• Searchable help is available through the Help Menu in


In-Sight Explorer, as well as within a tool’s Property
Sheet.

Section 2 | Slide 37 Section 2 | Slide 38

To help you find the tool you are looking for, check the help file, which is organized in the same hierarchy
as the toolbox found in the software.

Context-sensitive help can be obtained by pressing the <F1> key.

In this section we covered the following topics:

- One In-Sight Explorer can manage multiple networked In-Sight systems from a single PC.
- A spreadsheet is composed of cells. The highlighted cell is the active cell. The spreadsheet and
its current contents are called a job.
- Searchable help is available through the Help Menu in In-Sight Explorer, as well as within a
Tool’s Property Sheet.



Lab Exercise

Section 2 | Slide 39

Complete:

- Lab Exercise 2.1 – Software & Image Acquisition


- Lab Exercise 2.2 – References

In-Sight Spreadsheets Standard Section 2 | Lab Exercise

Lab Exercise 2.1 – Software and Image Acquisition

At the end of this lab exercise, Participants will be able to:
• Log into the camera and put it into Live Mode
• Acquire a good image on the camera

The Participant will utilize the following In-Sight Functions to successfully complete this
exercise:
• Logging on
• Live Mode

Follow the steps below to complete the lab exercise:

1. Launch In-Sight Explorer (ISE) and enter the Spreadsheet view.
The Spreadsheet view displays. (If not, select View > Spreadsheet.)

2. Double-click on your camera to log into it and control it.
HINT: If you don’t know which camera is yours, click Help > About In-Sight Explorer
and match your MAC address (printed on the camera) to your camera’s name.

3. Start a new job.
Click the Live Video button to start a live image.

4. Move the part under the camera to confirm the image is updating.
NOTE: If there is too much glare on the part, try moving the tripod so it is not directly
under a ceiling light. As a last resort, try tilting the camera or part.

5. If you are using an Autofocus lens, continue with step 6.
If you are using a C-Mount lens, skip to step 15.
NOTE: Click anywhere in the Image view window to stop the live acquisition.

6. Double-click on cell A0, the Image cell.

7. Adjust the Exposure setting to establish light and dark pixels.
NOTE: Increase exposure for a lighter image.

7a. Adjust the Light Control settings to establish light and dark pixels.

Page 1 Page 2
In-Sight Spreadsheets Standard Section 2 | Lab Exercise

NOTE: Ensure either Always On or Exposure Control is selected, and then adjust
the Light Intensity to establish light and dark pixels.

For cameras with the “torch light” high intensity lighting accessory, check the
settings under Sensor > Light Settings.

8. Set the Focus Region in AcquireImage. You will need to decide which area of the
part to focus on, since it is a 3D part. Then adjust the focus by using the
Autofocus button.
NOTE: The button is in the lower right hand corner of the live video view.

9. Click the Live Video button to close the window.


10. Double-click on cell A0, the Image cell.
The Property Sheet – AcquireImage dialog box displays.
11. Set the Trigger to Manual and click the OK button.
NOTE: Use the Trigger button or use the <F5> key to trigger.

Page 3 Page 4
In-Sight Spreadsheets Standard Section 2 | Lab Exercise

NOTE: The top border of ISE will indicate what camera you are logged onto.
Confirm that you are logged onto your camera.
NOTE: If you are not logged onto your camera, select your camera from the In-Sight
Network list and double-click on it. (Lab 1 – step #8)

To verify the writing on the block is dark and the metallic background light in your
image, click the Show Image Saturation button (top icon bar) to assist with this.
NOTE: Too much blue means that the image is too dark and too much red means
that the image is too light. To remedy this, adjust the aperture setting, exposure or
light control (LEDs).
12. Trigger the camera; you should see the last image that your camera acquired.
Move your hand under the camera – since you are not in Live Mode you should not
see any movement.
13. Click on the Live Video button. Wave your hand under the camera, you
should now see movement.
14. Place your good block with the Cognex side up so that the whole part appears in
your view.

15. If you are using a C-mount lens, adjust the two ring controls on the lens to adjust
the aperture and focus.
On 5000, 7000 and Micro cameras with a C-mount lens:
Aperture – adjusts the amount of light allowed to pass through the lens.
Focus – adjusts the sharpness of the image.

16. If the cameras in the training room are on a network, find another In-Sight system
in the room and ask its user if it is okay for you to try to log into it.
17. With the Set up Image button selected, activate Live Mode on their system to
verify that you have logged into the correct system. Offer them the same courtesy.
18. Browse through the various drop-down menus in In-Sight Explorer to compare
what is available on both the Icon bar and within the Menu bar.
19. Save the job as MyFocus.job on the In-Sight camera and in your own folder on the
PC.

Page 5 Page 6
In-Sight Spreadsheets Standard Section 2 | Lab Exercise

Lab Exercise 2.2 – References

At the end of this lab exercise, Participants will be able to:
• Insert Absolute and Relative references into a spreadsheet and observe the
differences

The Participant will utilize the following In-Sight Functions to successfully complete this
exercise:
• Absolute Reference
• Relative Reference

Absolute References
Follow the steps below to complete the lab exercise:

1. Click the New Job button to begin a new job.
A blank Spreadsheet displays.
NOTE: We will not be using an image for this lab.

2. Enter a value of 1.0 in cell A2.
3. Enter a value of 2.0 in A3.
4. Enter a formula into cell B2 that adds cells A2 and A3 using Absolute References.

Relative References
Follow the steps below to complete the lab exercise:

1. Enter a value of -1.0 in cell A5.
2. Enter a value of 4.0 in A6.
3. Enter a formula into cell B5 that adds cells A5 and A6 using Relative References.

Page 7 Page 8
In-Sight Spreadsheets Standard Section 2 | Lab Exercise

Copying and Pasting Relative References


Follow the steps below to complete the lab exercise:

1. Highlight cell B2.


2. Copy and paste it to cell D2.
3. Highlight cell B5.
4. Copy and paste it to cell D5.
5. Examine the formula in cell D5 and compare it to the original formula in B5.

How do they differ? ___________________________________________

Why do they differ? ___________________________________________

Why is cell D2 showing a number? _______________________________

6. Save the job as MyCells.job on the In-Sight camera and your own folder on the
PC.

Page 9
In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The
questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard
section.

Section 2 – Software, Image and Calibration


• Manage multiple networked In-Sight systems from a single PC
• Explain the basic principles and terminology of image acquisition
• Record and play back images
• Navigate through the spreadsheet
• Save job files
• Load job files

1. Name three window panes in the Spreadsheets Mode of In-Sight Explorer.

2. What are the two differences between a camera’s being online vs. offline?

3. List three types of online trigger.

4. Name the two types of filmstrips. What are the two main differences?

5. What are the two kinds of reference in a spreadsheet cell and how do they differ?

Page 1
Objectives

At the end of this Section Participants will be able to:


Pattern & Logic Tools
Section 3
- Apply the Property Sheet parameters and auto-inserted
information for FindPatterns to a sample image
- Identify uses for the PatMax technology
- Configure the PatMax Pattern tools
- Create basic mathematical formulas involving If, And, InRange
and Not functions

Section 3 | Slide 2

The third section of the In-Sight Spreadsheets Standard training will focus on the Pattern and Logic
Tools.

At the end of this section Participants will be able to:
- Apply the Property Sheet parameters and auto-inserted information for FindPatterns to a sample
image
- Identify uses for the PatMax technology
- Configure the PatMax Pattern tools
- Create basic mathematical formulas involving If, And, InRange and Not functions

Section 3 | Slide 1 Section 3 | Slide 2


Steps for Application Creation | Block Inspection

1. Analyze Problem
1
2. Create Prototype Job 2
3
3. Design Operator Interface

4. Complete & Deploy


4 4 4

Section 3 | Slide 3 Section 3 | Slide 4

1. Analyze Problem
- Determine what needs to be inspected
- Understand what is considered Good and Bad
2. Create Prototype Job
- Use vision tools and logic to inspect part
3. Design Operator Interface
- Decide how the results will be given to the user (visual, Ethernet, discrete, etc.)
4. Complete & Deploy
- Final details for longevity and ease of use
- Maintenance functions like back-up and restore

To pass, the block must meet these tests:
1. Block must be present – Tool used: FindPatterns or PatMax
2. Block must not have a gouge – Tool used: ExtractHistogram
3. Block must have correct length (within tolerance) – Tool used: FindSegment
4. Block must have correctly sized holes (within tolerance) – Tool used: ExtractBlobs



FindPatterns | Applications

FindPatterns

Section 3 | Slide 5 Section 3 | Slide 6

In this section we will find a model using the FindPatterns Tool.

The goal of a vision application is to determine whether a part is good or bad, based on image analysis of
the part. You need to define as precisely as possible what distinguishes a good part from a bad part in
order to design an inspection system. Try listing all the criteria that must be met for the part to pass,
keeping in mind all the variations of “good” and “bad” parts.

Considerations:

- Which vision tools might you use for analysis? (Sometimes only one tool will work, sometimes
there will be a choice.)
- Do you require real world units of measure rather than pixels?
- How many cameras and which model(s) will you need to capture all of the required detail?
- Variations in color can affect how the part appears to the vision system.
- If the parts move, will you require strobed lighting?
- For parts whose position varies, how will you locate and fixture the part?
- How will you trigger your system to acquire an image?
- What outputs will your system need to send results to external hardware, for example, to reject
a part?
- How will the operator interact with the system?



Using FindPatterns to Locate Part | FindPatterns

Find Region

Model Region

(x, y)
Angle

Section 3 | Slide 7 Section 3 | Slide 8

When the part moves in the Field of View (FOV), the tools that inspect a certain area on the part must also
move. This is done by “Fixturing” with a Locating tool to:
- Determine (x, y) part location
- Determine rotation (angle)
- Determine scale variations within ±10%

Pattern Match – Train a model part, then search for the model part at run time (FindPatterns & PatMax®):
- Use FindPatterns to Train and Search for a specific model or pattern in the image
- FindPatterns performs pattern-matching
- FindPatterns is available on all In-Sight vision systems at no additional cost

The x, y, and angle results will be used to locate the part for other tools.
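The math behind fixturing a tool region to the found (x, y, angle) pose can be sketched in Python (our own illustration, not In-Sight's API; it assumes a plain x/y coordinate convention with angles in degrees):

```python
import math

def fixture_point(x, y, pose_x, pose_y, pose_deg):
    """Rotate a tool-region point by the found angle, then translate it
    by the found (x, y) position -- the same idea fixturing applies to
    every region tied to a locator result."""
    a = math.radians(pose_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return rx + pose_x, ry + pose_y

# A region corner trained at (10, 0) relative to the pattern origin,
# with the part found at (100, 50) rotated 90 degrees:
print(fixture_point(10, 0, 100, 50, 90))  # -> approximately (100.0, 60.0)
```

Every inspection region fixtured to the locator gets the same rotation and translation, so the tools follow the part wherever it appears in the FOV.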



Using a FindPatterns Function | Adding Vision Tools to the Spreadsheet

Section 3 | Slide 9 Section 3 | Slide 10

To use a FindPatterns Function, as with any vision function, drag it from the Tool Palette and drop it into a
cell in the spreadsheet.

- Leave blank rows at the top for building a custom operator interface later
- Always insert descriptive comments using an apostrophe



FindPatterns Property Sheet | Setting a Region

Model Region
Graphics

Find Region
Graphics

Section 3 | Slide 11 Section 3 | Slide 12

A Property Sheet is used to configure In-Sight Explorer functions whenever a function returns a structure
or the function has many input parameters.

Every Property Sheet has a table that lists the default parameters, values and expressions of a function.
Each row of a Property Sheet consists of a parameter name and value, or a group of expanded/collapsed
parameters and values.

Model Region and Find Region graphics (red lines) can be used to move, resize, and rotate a region. The
mode depends on where you place the cursor.

The Regions define the area of the image where the Location or Inspection Tool will perform its operation,
and are also referred to as the Region of Interest (ROI).

NOTE: The entire Model Region must be within the Find Region for the pattern to be found.
The FindPatterns Property Sheet includes the following:
- Image – Reference to target image cell
- Fixture – Where tool should fixture itself
- Model Region – Region specifying model features
- Model Settings – Advanced settings (more…)
- Find Region – Region specifying searching area
- Number to Find – How many instances to look for
- Angle Range – ± rotation tolerance
- Scale Tolerance – Enable +/-10% size variations
- Thresh: Accept – Minimum score needed for match
- Thresh: Confuse – Score above which a match is considered certain (used to speed the search)
- Timeout – Max amount of time spent
- Show – Graphic options for display

TIP: Property Sheet parameters support drag-and-drop placement of their controls into the spreadsheet.
By selecting a parameter from the Property Sheet and then dragging that parameter to the spreadsheet,
the parameter’s label and edit control will be automatically created and referenced in the spreadsheet.



How to Set Model or Find Region | FindPatterns: Model Settings
1.

2.

3.

4.

Section 3 | Slide 13 Section 3 | Slide 14

To set the Model or Find Region:

1. Select Model Region from the Property Sheet
2. Click the edit graphics button
3. Use the graphics (red lines) to select features for the model on the image
4. Click the green check to accept the selection

The Model Settings specify the model training parameters.
- Model Type – Specifies area model or edge model training.
- Coarseness – Specifies the size of the smallest features in the trained model.
- Accuracy – Specifies the tradeoff between accuracy/reliability and execution speed.
- Offset Row – Specifies the row offset (-479 to 479; default = 0) from the model’s center to the
response point, as measured in the model’s local coordinate system.
- Offset Column – Specifies the column offset (-639 to 639; default = 0) from the model’s center
to the response point, as measured in the model’s local coordinate system.
- ForceTrain – Enables/disables automatic model training on spreadsheet updates. When
ForceTrain is OFF, the model is trained only when the Model Region or Model Settings are
changed. When ON, the model will be re-trained every time the spreadsheet executes.
- Patterns – Specifies whether the FindPatterns function will search using its own model, or a
model contained in another FindPatterns function. The default value is 0, which means that
current FindPatterns function will use its own model.
NOTE: This behavior does not chain; it is invalid to reference a PatFind model that, in turn,
references another PatFind model.



FindPatterns: Model Settings: Area Model | FindPatterns: Model Settings: Edge Model

Area Model:
• Uses pixel values in region
• Normalized correlation search
• Use when all pixel values in the region should match the model

Edge Model:
• Creates a geometric model of just the edges
• Use when edges are what is important, not the pixel grayscale values in the region
• Look for the green outline of the boundary

Section 3 | Slide 15 Section 3 | Slide 16

The Area Model training creates patterns based on a uniform sampling of greyscale pixel values from the
Model Region. The area model similarity metric is based on the normalized greyscale correlation
coefficient.

The Area Model may be used when:
- The Model Region is small
- There are no well-defined edges
- Speed is critical

The Edge Model training creates patterns based on a sampling of points biased to the immediate vicinity
of the greyscale discontinuities typically found on object boundaries. The edge model similarity metric is
based on the normalized comparison of greyscale derivatives.

The Edge Model may be used when:
- Back lighting is used
- Non-linear lighting changes occur, such as on shiny metal parts
- There are significant changes in focus or background

NOTE: The maximum number of points that may be used to define a model is 4096.



FindPatterns: Coarseness | FindPatterns: Accuracy

Section 3 | Slide 17 Section 3 | Slide 18

Coarseness specifies the size of the smallest features in the trained model.
- 0 = Fine – Smallest features are approximately 4 pixels in size
- 1 = Medium (default) – Smallest features are approximately 4 to 8 pixels in size
- 2 = Coarse – Smallest features are larger than 8 pixels in size

Accuracy specifies the tradeoff between accuracy/reliability and execution speed.
- 0 = Accurate – Higher accuracy/reliability and slower execution speed
- 1 = Medium (default) – Moderate accuracy and speed
- 2 = Fast – Lower accuracy/reliability and faster execution speed



Coarseness & Accuracy | Coarseness & Accuracy Examples

Coarseness   Accuracy   Min Feature   Edge Tolerance
Fine         Accurate   1-2           0.5-1.5
Fine         Medium     2-4           0.5-2
Fine         Fast       4-6           1-2.5
Medium       Accurate   4-6           0.5-2.5
Medium       Medium     6-8           1-3
Medium       Fast       8-10          1.5-4
Coarse       Accurate   10-12         0.5-3
Coarse       Medium     12-14         1-4
Coarse       Fast       14-16         2-6

Examples:
- Coarseness: Coarse, Accuracy: Fast
- Coarseness: Medium, Accuracy: Medium
- Coarseness: Fine, Accuracy: Accurate

Section 3 | Slide 19 Section 3 | Slide 20
For Edge Models, Coarseness and Accuracy together determine the size of features to look for and the
tolerance for the edge locations in training. The table shows the translation between these two
parameters and the minimum feature size and tolerance for edge movement in the trained pattern.

Min Feature – the minimum size (in pixels) of the dark and light region on each side of an edge for that
edge to be included in the model. If the feature size is small, all of the really fine features in the pattern
will be included in the model, but too much noise might be included for a very coarse model. If the feature
size is coarse, the noise will be ignored, but so will the very fine features if they exist.

The green outlines around the Cognex name show the differences in coarseness and accuracy – notice
in the top example that the outline is not recognizing any of the letters, while in the bottom example all
edges of the letters are found.

Notice the position of the red reticle – to the human eye, there is no difference between the Coarse/Fast
and the Fine/Accurate settings, but to the vision tool the position could be off by a few pixels. A pixel
could be a millimeter or more.

Edge Tolerance – the maximum amount that an edge can move (± in pixels) from the trained location
and still give a strong response. If this tolerance is small, the pattern can be located more accurately, but
small steps must be taken during run time to ensure that the pattern is found, resulting in a slower search.
If this tolerance is large, the pattern can't be located as accurately, but larger steps can be taken during
run time, resulting in a faster search.

For Area Models, the Edge Tolerance numbers derived from Coarseness and Accuracy are stored during
training so that coarser or finer searching can be done, resulting in a small difference in search speed.

For both types of models, an Accuracy setting of Fast causes the run-time to evaluate fewer similar
looking possible results than an Accuracy setting of Accurate. This also results in a small speed
difference.



Accept & Confusion Thresholds

Phase 1 – Lower Resolution Phase 2 – Full Resolution

Accept (50) Confusion (70)

Accept (50) Confusion (70) 0 100

0 100
No Yes

Maybe Re-examine Maybe’s

Section 3 | Slide 21 Section 3 | Slide 22

Phase 1:
- Uses lower resolution
- Fast
- Ignores candidates below the Accept threshold
- Considers candidates above the Confusion threshold to be matches

Phase 2:
- Only runs if Phase 1 returns an insufficient number of valid results
- Uses full resolution
- Evaluates only those candidates that scored between the Accept and Confusion thresholds
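The two-phase filtering can be sketched in Python (illustrative only: score_coarse and score_full stand in for In-Sight's internal scoring, and this simplification always runs Phase 2 on the "maybe" candidates rather than skipping it when enough certain matches already exist):

```python
def find_matches(candidates, accept, confusion, score_coarse, score_full):
    """Two-phase pattern search.

    Phase 1 scores every candidate at low resolution: below Accept it is
    rejected, above Confusion it is a certain match, and in between it
    is a 'maybe'. Phase 2 re-scores only the maybes at full resolution.
    """
    matches, maybes = [], []
    for c in candidates:
        s = score_coarse(c)
        if s >= confusion:
            matches.append(c)      # certain match, no Phase 2 needed
        elif s >= accept:
            maybes.append(c)       # re-examine at full resolution
    for c in maybes:               # Phase 2: slower, full-resolution scoring
        if score_full(c) >= accept:
            matches.append(c)
    return matches

# With Accept=50 and Confusion=70: a candidate scoring 60 coarse and 75 at
# full resolution is kept; one scoring 40 coarse is dropped early.
print(find_matches([60, 40], 50, 70, lambda c: c, lambda c: c + 15))  # -> [60]
```

This also shows why a wide gap between Accept and Confusion creates a large "maybe" zone (more accurate, slower) while equal thresholds eliminate it (fastest).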



Accept & Confusion Thresholds | Caution When Changing Model

Accept (70), Confusion (70): no Maybe zone – fastest.
Accept (50), Confusion (100): large Maybe zone – most accurate.

CAUTION!
• Changing any parameter for the Model will cause retraining on the current image.
• Also, any tools fixtured to the changed pattern will need to have their regions repositioned.
• This happens with any pattern tool.

Section 3 | Slide 23 Section 3 | Slide 24

This slide depicts how to make the tool use only Phase One or Phase Two. Results would then depend
only on the accuracy of the ‘found’ pattern.

Use caution when changing the model region:
- Changing any parameter for the Model will cause retraining on the current image.
- Any tools fixtured to the pattern will need to have their regions repositioned.
- This happens with any pattern tool, e.g., PatMax and PatMaxRedLine.



What is PatMax?

Using PatMax to Find a Model

PatMax: Feature-based FindPatterns: Pixel grid-based

- PatMax is a pattern-based search technology.


- The models in PatMax use geometric (feature based) representation;
FindPatterns is pixel grid-based. Feature based is more accurate.

Section 3 | Slide 25 Section 3 | Slide 26

In this section we will find a model using the PatMax Tool.

A PatMax pattern is a collection of geometric features where each feature is a point on the boundary
between two regions of dissimilar pixel values. PatMax calculates geometric representations of curves.
This makes it more accurate than FindPatterns, which approximates curves with line segments that follow
the pixel grid. PatMax can be more accurately rotated or scaled.



How Does it Work? | Advantages of PatMax

PatMax uses two tools:
FindPatMaxPatterns
- Finds features in an image based on the trained pattern
TrainPatMaxPattern
- Extracts and trains a pattern: a trained geometric description of an object you wish to find

• PatMax location results have a very high level of accuracy
• PatMax can deal with difficult situations such as:
- shiny parts with reflecting light
- a desired feature that is similar to other features in the background
• PatMax is an option available on In-Sight vision systems at an additional cost

Section 3 | Slide 27 Section 3 | Slide 28

PatMax offers three key features that distinguish it from other pattern-location technologies in machine
vision:
- High-speed location of objects whose appearance is rotated, scaled and/or stretched
- Location technology that is based on object shape, not greyscale values
- Very high accuracy

PatMax uses two tools:
- FindPatMaxPatterns – Finds features in an image based on the trained pattern
- TrainPatMaxPattern – Extracts and trains a pattern; a trained geometric description of an object
you wish to find

Situations that require the accuracy and reliability of the PatMax Patterns Tool:
- When variations in lighting and reflections are difficult to control. For example, metal parts can
reflect light in random directions.
- When the pattern being inspected is similarly shaped or shaded in comparison to something in
the background, or the pattern is being overlapped or partially hidden by other objects in the
image.
- When you want to accurately recognize one type of pattern from other, similar, patterns.
- When the conditions of your deployment environment are too demanding for the Patterns
Location Tool to perform consistently and reliably.



TrainPatMaxPattern Parameters | Algorithm

Choose from two algorithms:

PatQuick PatMax

Section 3 | Slide 29 Section 3 | Slide 30

The TrainPatMaxPattern Property Sheet includes the following:
- Image – Reference to target image cell
- Fixture – Where tool should fixture itself
- Pattern Region – Region specifying features to train
- External Region – Allows use of custom region
- Pattern Origin – Location within pattern to report
- Pattern Settings – Specialized settings
- Algorithm – Choose from PatQuick or PatMax
- Elasticity – Specifies allowed perimeter deviation
- Ignore Polarity – Toggles to check for opposite polarity
- Sensitivity – Disabled
- Coarse Granularity – Granularity used to find large features
- Fine Granularity – Granularity used to find small features
- Reuse Training Image – Model image is saved for later use (retrain)
- Timeout – Milliseconds before tool gives up
- Show – Graphic options to display

NOTE: Any time you make region changes to the TrainPatMaxPattern tool (directly or through an
EditRegion), the model is retrained on whatever is inside the training region at that time – even with
“Reuse Training Image” selected. If that is not selected, any changes to TrainPatMaxPattern will retrain
the model.

The tool is disabled by default.

GetTrained indicates if a pattern was trained or not.

Choose from two algorithms:

PatQuick
- Optimized for speed
- Uses coarse granularity (details later)

PatMax
- Optimized for accuracy
- Uses fine granularity (details later)

NOTE: The color of a trained feature (green, yellow and red) represents the quality of the feature
candidate for matching. Green represents high quality; red represents low. Yellow is intermediate. The
Granularity parameters should be adjusted until all trained features are green, or you should choose a
better image to train. Distinct features (edges) and good image contrast will yield the best results.
Elasticity Ignore Polarity

Trained Pattern Elastic Change on Part

Section 3 | Slide 31 Section 3 | Slide 32

Elasticity specifies the amount of non-linear perimeter deviation (0 to 10; 0 = default).
- 0 = No Tolerance
- 1 or Greater = Flexible Boundary

A linear change in a pattern is where the whole pattern changes in the same way. For example, if a cross were twice as large all over as the model, that would be a linear change, called Scale. But in the cross at the right, only part of the cross has changed. For example, it could be made of rubber, and the top section was pulled out at the corners. This is a non-linear change.

Increasing the elasticity parameter makes it more likely to find a part such as this. But if you increase the elasticity value too much, you run the risk of a wrong feature being selected.

Ignore Polarity defines whether found patterns may contain color-inverted match features with respect to the Model pattern (default = Off, unchecked). When Ignore Polarity is applied (On, checked), detected patterns with color-inverted features, such as black/white vs. white/black in the Model pattern, will be classified as matching the Model pattern.

Ignore Polarity instructs PatMax to check for the original polarity identified at training and the opposite polarity as well. This may increase the execution time for PatMax.



FindPatMaxPatterns Parameters Accept Threshold

[Diagram: score scale from 0 to 100; scores below the Accept Threshold (80 in this example) are not valid matches, scores at or above it are valid matches]

Section 3 | Slide 33 Section 3 | Slide 34

The FindPatMaxPatterns Property Sheet includes the following:

- Image – Reference to target image cell
- Fixture – Where tool should fixture itself
- Find Region – Region specifying searching area
- External Region – Allows use of custom region
- Pattern – Reference to a TrainPatMaxPattern
- Number to Find – Number of matches to search for
- Accept – Minimum score necessary
- Contrast – Specifies lowest contrast necessary
- Clutter in Score – Toggles consideration of extra features
- Outside Region – Percentage allowed out of region
- Find Tolerances – Sets rotation, scale & aspect ratio range
- Find Overlapping – Sets allowable overlapping
- Timeout – Milliseconds before tool gives up
- Show – Graphic options to display

Accept Threshold defines the degree of similarity that must exist between the model pattern and the found pattern. The minimum acceptable score will vary depending upon the Location Tool selected.

PatMax uses this score to determine if the match represents a valid instance of the model within the search region. Increasing the Accept value reduces the time required for the search.

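The Accept Threshold rule can be sketched in a few lines of Python. This is an illustration only: the threshold value 80 and the candidate scores are made up for the example, and the real match scoring is internal to the PatMax tool.

```python
def is_valid_match(score, accept_threshold=80):
    """Return True when a match score (0-100) meets the Accept threshold.

    Mirrors the rule above: scores at or above the threshold are valid
    matches; scores below it are not. The value 80 is the example
    threshold from the diagram, not a universal default.
    """
    return score >= accept_threshold

# A higher threshold rejects borderline candidates earlier, which is
# why raising the Accept value reduces the time required for search.
candidates = [95, 82, 79, 40]
valid = [s for s in candidates if is_valid_match(s)]
```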


Find Tolerances FindPatterns & FindPatMaxPatterns Results

[Diagram: Find Tolerances – allowable rotation of match relative to model; allowable scaling range relative to pattern (100% means no scaling)]

[Diagram: Results – Cross, Tool Structure, Auto-inserted Functions]

Section 3 | Slide 35 Section 3 | Slide 36

Find Tolerances – Specifies the settings for finding patterns that are rotated or scaled with respect to the trained pattern.

- Angle Start – Specifies the angle at which to begin searching for matches, in degrees counter-clockwise (-180 to 180; default = -15).
- Angle End – Specifies the angle at which to stop searching for matches, in degrees counter-clockwise (-180 to 180; default = 15).
- Scale Start – Specifies the scale at which to begin searching for matches (1 to 10,000; default = 100).
- Scale End – Specifies the scale at which to stop searching for matches (1 to 10,000; default = 100).
- Aspect Ratio – Limits match-finding to uniform scale only or allows aspect ratio in X or Y, in addition to a uniform scale.
  NOTE: The Aspect Start and Aspect End parameters are disabled if Uniform Scale Change is selected.
  - 0 = Uniform Scale Change
  - 1 = Uniform and X Change
  - 2 = Uniform and Y Change
- Aspect Start – Specifies the minimum scale change at which to begin searching for matches, as a percentage of the trained pattern (1 to 10,000; default = 100).
- Aspect End – Specifies the maximum scale change at which to stop searching for matches, as a percentage of the trained pattern (1 to 10,000; default = 100).

Results:

- Tool Structure – Holds all the result information returned by the tool.
- Cross – Indicates the center of the model found.
- Auto-inserted Functions – The most commonly needed information; they pull the data out of the Structure.
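The Angle and Scale Find Tolerance limits described above amount to simple range checks on each candidate match. The sketch below models only that limit check in Python (the defaults mirror the Property Sheet defaults listed above; the tool's actual search is far more involved):

```python
def within_find_tolerances(angle, scale,
                           angle_start=-15, angle_end=15,
                           scale_start=100, scale_end=100):
    """Check a candidate match against Angle/Scale Find Tolerances.

    Angles are in degrees counter-clockwise (-180 to 180); scale is a
    percentage of the trained pattern (100 = no scaling). Defaults are
    the Property Sheet defaults. Illustrative sketch only.
    """
    return (angle_start <= angle <= angle_end and
            scale_start <= scale <= scale_end)
```

For example, a match rotated 20 degrees fails with the default tolerances but passes once Angle End is widened to 30.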



FindPatterns & FindPatMaxPatterns

A | B
Structure
GetRow($A$2,B2)
GetCol($A$2,B2)
GetAngle($A$2,B2)
GetScale($A$2,B2)
GetScore($A$2,B2)

Using PatMax RedLine to Find a Model

Section 3 | Slide 37 Section 3 | Slide 38

This slide shows the formula that is entered into each cell when the tool is added to the spreadsheet.

In this section we will find a model using the PatMax RedLine Tool.
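The idea behind the auto-inserted functions — each one pulls a single field for match N out of the result Structure — can be modeled outside In-Sight like this. The dict layout below is a hypothetical stand-in for the real Patterns structure, and the field values are invented for the example:

```python
# Hypothetical stand-in for the Patterns structure held in cell A2.
patterns = {
    0: {"row": 240.5, "col": 320.2, "angle": -3.1,
        "scale": 100.0, "score": 92.4},
}

# One accessor per auto-inserted function: each pulls one field
# for match index n out of the structure, like GetRow($A$2,B2).
def get_row(struct, n):   return struct[n]["row"]
def get_col(struct, n):   return struct[n]["col"]
def get_angle(struct, n): return struct[n]["angle"]
def get_scale(struct, n): return struct[n]["scale"]
def get_score(struct, n): return struct[n]["score"]
```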



What is PatMax RedLine? Requirements for PatMax RedLine

Newer In-Sight models, such as:

In-Sight 76xx, 78xx, etc.

A complete reinvention of pattern matching! In-Sight 5705

In-Sight Micro 8405


Section 3 | Slide 39 Section 3 | Slide 40

PatMax RedLine has been designed from the ground up to be optimized for speed on the newer In-Sight 5-megapixel models. It is not based on PatMax. PatMax RedLine is typically 3 - 7 times faster than PatMax, and is sometimes even faster. It allows for both speed in pattern matching and high resolution (accuracy).

Requirements for PatMax RedLine:
- Newer In-Sight models, such as 57xx, 78xx, and 8xxx series
- Spreadsheet Mode: In-Sight Explorer and firmware version 5.1.0 or higher
- EasyBuilder Mode: In-Sight Explorer and firmware version 5.2.0 or higher

PatMax RedLine is not supported by the following:


- In-Sight 51xx, 54xx, 56xx models
- In-Sight 74xx models
- In-Sight Micro 1xxx models



How Fast is PatMax RedLine? How Fast is PatMax RedLine?

PatMax RedLine 150 ms (3.2x faster) PatMax RedLine 111 ms (4.9x faster) PatMax RedLine 31 ms (6.8x faster)
Demo

PatMax = 486 ms PatMax = 551 ms PatMax = 210 ms

PatMax RedLine 208 ms (16x faster) PatMax RedLine 54 ms (4.5x faster) PatMax RedLine 810 ms (10.8x faster)

PatMax = 3330 ms PatMax = 242 ms PatMax = 8795 ms*

PatMax RedLine is generally 4 - 7 times faster than PatMax

Section 3 | Slide 41 Section 3 | Slide 42

PatMax RedLine is generally 4 - 7 times faster than PatMax, but may be even faster. The speeds depend on the In-Sight camera model, the nature of the part itself, and the settings in the Property Sheet, including Model Region, Find Region, and Angle of Rotation.

NOTE: These timings were obtained on an In-Sight 5705.

The demo In-Sight job (PatMaxRedLine_demo) allows you to compare timings between PatMax and PatMax RedLine, using an In-Sight camera that has both tools. (The comparison of timings is not useful on the Emulator, since relative behavior between the two tools might not represent comparisons on actual cameras.)

To run the demo:
1. Use a camera that has PatMax RedLine. Position a part under the camera and adjust
exposure and focus to get a good image. Open the demo job and go into its Custom View.
2. Set up the Pattern Region. This will be used for both PatMax and PatMax RedLine.
3. Click on Train Pattern. This will create separate models for PatMax and PatMax RedLine,
since they use different algorithms for their models.
4. Set up the Find Region. This will be used for both PatMax and PatMax RedLine.
5. Run the PatMax portion of the spreadsheet by clicking on the PATMAX logo in the Custom
View, then click on the Acquire button. This will display the job time for PatMax.
6. Click on the PATMAX REDLINE logo in the Custom View, then click on the Acquire button.
This will display the job time for PatMax RedLine. Compare with PatMax.
7. Try changing other parameters in the Custom View and repeating steps 5 and 6 to compare
times. For example, increasing the Angle should increase the ratio of speed between the two
tools.
8. Note that if you change a model’s parameter for PATMAX REDLINE, its first timing after that
will be much higher due to an optimization that is done at that point.



PatMax RedLine consists of two functions Moving from PatMax to PatMax RedLine
TrainPatMaxRedLine

FindPatMaxRedLine

• Existing job files using PatMax can be converted to


PatMax RedLine

• However, there are some differences in the Property


Sheet parameters and how they work

Section 3 | Slide 43 Section 3 | Slide 44

PatMax RedLine consists of two functions: TrainPatMaxRedLine and FindPatMaxRedLine.
NOTE: These functions are only available on In-Sight vision systems running In-Sight firmware 5.1.0 and later.

- TrainPatMaxRedLine – Extracts and trains a pattern from an image for use with the FindPatMaxRedLine function.
  NOTE: A trained pattern consumes approximately 1 MB when using the default pattern Region (320 x 440), although the pattern size varies greatly depending on the size of the trained pattern.
- FindPatMaxRedLine – Finds objects in an image based on a trained pattern from a Patterns structure.

Moving from PatMax to PatMax RedLine:

- Existing job files using PatMax can be converted to PatMax RedLine
  - Load the job on a supported 5 MP system (5705, 5705C or Micro 8405)
  - Replace the TrainPatMaxPattern function with TrainPatMaxRedLine
  - Retrain the pattern using PatMax RedLine (PatMax RedLine cannot use PatMax patterns)
  - Replace FindPatMaxPatterns with FindPatMaxRedLine
- However, there are some differences in the Property Sheet parameters and how they work.
  - Some parameters are not present in both tools
  - Some parameters are in both tools, but behave differently



Key Differences Between PatMax & PatMax RedLine PatMax RedLine is NOT PatMax!

• Setup and runtime differences between PatMax and Applications where PatMax may still be needed:
PatMax RedLine include:
- Fixtured search region (tool can run slow in 5.4 and earlier)
- Accept Thresholds and scoring methods are different - Variable aspect ratio, perspective, or skew
- Contrast Threshold has different meaning - Non-linear deviations along perimeter
- Automatic search optimization behaves differently - Filtering match results at approximately same X/Y location by
- Angle and Scale Find Tolerance behavior is different Angle and Scale Overlap

Section 3 | Slide 45 Section 3 | Slide 46

PatMax RedLine matches tend to score higher than PatMax. Therefore, it may be necessary to use a higher Accept Threshold for PatMax RedLine compared to PatMax.

PatMax RedLine contrast is a relative measure of contrast change between features in the trained pattern versus target features in the search image. PatMax contrast is an absolute measure of contrast for target features in the search image.

In PatMax RedLine, an automatic (re)optimization occurs following any change to Angle/Scale Find Tolerances or search ROI. This includes optimization at every trigger when PatMax RedLine is fixtured. This can make the tool run slow if it is fixtured. However, PatMax RedLine is so much faster than PatMax for a 5 MP image that a fixture is no longer needed in most cases.

PatMax RedLine Find Tolerances are strict, meaning the tool will fail if a limit is exceeded. PatMax Find Tolerances are permissive, meaning there is some allowance beyond the limits. Therefore, if PatMax RedLine does not find matches that PatMax finds, it may be necessary to widen the Angle and Scale Find Tolerances in PatMax RedLine.

Differences:

- Fixtured search region – In versions 5.4 and earlier, PatMax RedLine's automatic search optimization may cause it to run slower than PatMax if the search region is fixtured, especially when searching for multiple targets on a confusing image background.
- Variable aspect ratio, perspective or skew – PatMax RedLine does not yet support X/Y aspect ratio changes, perspective changes, or skew in the target compared to the trained pattern.
- Non-linear deviations along perimeter – PatMax RedLine does not yet have an Elasticity parameter, and may be less tolerant of deviations along the target perimeter compared to PatMax.
- Filtering match results at approximately the same X/Y location by Angle and Scale Overlap – PatMax RedLine cannot filter results based on angle and/or scale differences between two or more matches found at the same X/Y location (as allowed by the XY Overlap setting).



Comparison of Pattern Matching Tools

                    FindPatterns         PatMax                       PatMax RedLine
Location Accuracy                        ~4 times more accurate       ~4 times more accurate
                                         than FindPatterns            than FindPatterns
Relative Speed      fastest              moderately fast              4x - 7x faster than PatMax
Versatility         Excellent for many   Excellent for challenging    Excellent for challenging apps
                    applications         applications                 requiring both speed & accuracy
Scale Range         90 - 110%            1 - 10,000%                  1 - 10,000%
Additional Price    included             ~$500                        ~$750 (includes PatMax)
Which Models?       included on all      option on newer models       included on newer models
                    standard models

Logic Tools

Section 3 | Slide 47 Section 3 | Slide 48

Take a moment to review the differences between the Pattern Matching tools that are available in the various In-Sight vision systems. Comparisons of speed are general, and may not apply to every situation. The best way to compare speeds is to measure them for your setup.

In this section we will review many of the Mathematical Logic Tools.



Mathematical Functions Logic: If

If (condition, true_value, false_value)

• If condition is True, cell gets true_value


• If condition is False, cell gets false_value

Example:
A1 = 200
A2 = If (A1<128, 1, 0)
 A2 contains 0

A2 = If (A1<128, “Good”, “Bad”)


 A2 contains string of characters “Bad”

Section 3 | Slide 49 Section 3 | Slide 50

In-Sight has an extensive set of Vision, Mathematical, and other types of functions. The ultimate goal is to make a decision about an object being inspected using the information returned by the vision tools.

- Logic
- Lookup
- Math
- Statistics
- Trigonometry

In this section we will focus on Logic functions.

Logic functions provide Boolean algebra and conditional testing capabilities in the In-Sight spreadsheet. All bitwise functions perform the corresponding logical operation on each bit position in the binary representation of the arguments and return the resulting binary number in decimal form.

For all Logic functions:
- False = 0
- True = any non-zero result

When functions evaluate numbers:
- False = 0
- True = any other number (for example -5, -345, 1, 34 are all true)

Example:
A1 = 200
A2 = If (A1<128, 1, 0)
- A2 contains 0
A2 = If (A1<128, “Good”, “Bad”)
- A2 contains the string of characters “Bad”

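The numeric truthiness rule and the If example above can be reproduced in Python as a quick sanity check (a sketch of the semantics only, not of the In-Sight implementation):

```python
def insight_if(condition, true_value, false_value):
    """Model In-Sight If(): any non-zero number is True, 0 is False."""
    return true_value if condition != 0 else false_value

# The worked example from the slide.
A1 = 200
A2 = insight_if(A1 < 128, 1, 0)               # A1 < 128 is False, so 0
A2_str = insight_if(A1 < 128, "Good", "Bad")  # same condition, so "Bad"
```

Note that negative numbers count as true, matching the rule that -5 or -345 are all true.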


Logic: And Logic: InRange

And (condition0, condition1, condition2, …)

• Returns TRUE (1) if all conditions are TRUE
• Otherwise returns FALSE (0)

Example:
A1 = 23, B1 = 10, C1 = 22
D1 = If (And(A1>B1, A1<C1), 1, -1)
→ D1 contains -1

InRange (value, min_value, max_value)

• Returns TRUE (1) if value is greater than or equal to min_value AND less than or equal to max_value
• Otherwise returns FALSE (0)

Example:
A1 = 23, B1 = 10, C1 = 22
F1 = InRange (C1, B1, A1)
→ F1 contains 1

Section 3 | Slide 51 Section 3 | Slide 52

Example:
A1 = 23, B1 = 10, C1 = 22
D1 = If (And(A1>B1, A1<C1), 1, -1)
- D1 contains -1

Example:
A1 = 23, B1 = 10, C1 = 22
F1 = InRange (C1, B1, A1)
- F1 contains 1



Logic: Not Logic: You Try It

Not(condition)

• Returns opposite of condition

Example:
B7 = 92
D1 = If(Not(B7>95), 1, 0)
→ D1 contains 1

A2 = 6
B2 = 244.5

What is in these cells?

A6 = If(A2>7, “High”, “Low”)
F2 = If(Not(B2>128), “Pass”, “Fail”)
D1 = If(And(A2>5, A2<B2), -1, 1)
F1 = Not(InRange(A2, 0, B2))

Section 3 | Slide 53 Section 3 | Slide 54

Example:
B7 = 92
D1 = If(Not(B7>95), 1, 0)
- D1 contains 1

What is in these cells?
A6 = If(A2>7, “High”, “Low”)
F2 = If(Not(B2>128), “Pass”, “Fail”)
D1 = If(And(A2>5, A2<B2), -1, 1)
F1 = Not(InRange(A2, 0, B2))

Answers: “Low”, “Fail”, -1, 0
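The four answers can be verified by modeling Not and InRange with In-Sight's 1/0 convention in Python (a semantics sketch, not the In-Sight implementation):

```python
def insight_not(condition):
    """Model In-Sight Not(): 1 for a false (zero) input, 0 for true."""
    return 0 if condition else 1

def in_range(value, min_value, max_value):
    """Model In-Sight InRange(): 1 when min_value <= value <= max_value."""
    return 1 if min_value <= value <= max_value else 0

A2, B2 = 6, 244.5
A6 = "High" if A2 > 7 else "Low"                  # 6 > 7 is false
F2 = "Pass" if insight_not(B2 > 128) else "Fail"  # Not(true) = 0
D1 = -1 if (A2 > 5 and A2 < B2) else 1            # both conditions hold
F1 = insight_not(in_range(A2, 0, B2))             # 6 is in range, Not(1) = 0
```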


How to Enter Functions Summary

• Property Sheets make it easy to specify parameters for


a function.
• Auto-inserted functions are the most commonly needed
results from a tool.
• FindPatterns is a powerful tool for locating features on
every part in 2 phases: Training and Searching.
• PatMax can be used to locate trained models when
appearance of the models is adversely affected.
• PatMax RedLine provides both high speed and high
resolution.
• In-Sight has an extensive set of Math functions used to
help make decisions about an inspection.

Section 3 | Slide 55 Section 3 | Slide 56

To enter functions, follow these steps: In this section we covered the following topics:
1. Find the function in the Palette pane.
2. Drag function to desired location in the spreadsheet. - Property Sheets make it easy to specify parameters for a function.
3. Make references to the desired cells to be evaluated by the function. - Auto-inserted functions are the most commonly needed functions for getting information out of
4. Finally, click the green check to save the changes made. Structure.
- FindPatterns is a powerful tool for locating features in every part in 2 phases: Training and
Searching.
- PatMax can be used to locate trained models when appearance of the models is adversely
affected.
- PatMax RedLine provides both high speed and high resolution.
- In-Sight has an extensive set of Math functions used to help make decisions about the
inspection.



Lab Exercise

Section 3 | Slide 57

Complete:
Lab Exercise 3.1 – Logic
Lab Exercise 3.2 – FindPatterns
Lab Exercise 3.3 – PatMax (if time allows)

In-Sight Spreadsheets Standard Section 3 | Lab Exercise

Lab Exercise 3.1 – Logic

At the end of this lab exercise, Participants will be able to:
• Utilize logic statements to determine Pass/Fail

The Participant will utilize the following In-Sight Logic Functions to successfully complete this exercise:
• If
• And

Logic – If
Follow the steps below to complete the lab exercise:

1. Start a new job.
2. Enter a value of -1.0 in cell A2.
3. Enter a formula into cell A4 that will display the word Accept if the value in cell A2 is greater than zero, or display Reject if the value is less than zero.
   HINT: Use the IF function under Mathematics → Logic.
4. Change the value in cell A2 to 1.0 and observe what happens.

Logic – If & And
Follow the steps below to complete the lab exercise:

1. Enter a value of -1.0 in cell B2.
2. Enter a formula into cell A6 that will display the word Accept in A6 if the two values in cells A2 and B2 are both greater than zero, or display Reject if otherwise.
   HINT: Use the AND function under Mathematics → Logic as the first parameter in an IF statement.
3. Change the value in cell B2 to 2.0 and observe what happens.
4. Save as MyCells2.job.


Lab Exercise 3.2 – FindPatterns

At the end of this lab exercise, Participants will be able to:
• Utilize the FindPatterns tool to locate the Cognex block in the Field of View
• Report the location based on row, column, and angle
• Apply the location information for fixturing in other vision functions

The Participant will utilize the following In-Sight Functions to successfully complete this exercise:
• Live Video
• FindPatterns
• Profiler

We will use the following terminology to identify the parts of the Cognex block.
• Logo – Cognex logo on the front of the part
• Text – Human readable code
• 2D Code – Data matrix code of human readable text
• Holes A, B, and C – The 3 holes in the flat portion of the block

[Image: Cognex block with the Logo, Text, 2D Code, and Holes A, B, and C labeled]

Follow the steps below to complete the lab exercise:

1. Open the MyFocus.job from Lab Exercise 2.
2. To verify the block is in the Field of View, click the Live Video button and position the block under the camera so that it is centered in the field of view, as shown above.
   NOTE: Make it as large as possible in the FOV for good resolution but leave some room for part movement.
3. Exit Live Video mode.
4. Leave the first 10 spreadsheet rows (numbered 0 – 9) blank (except for AcquireImage).
   NOTE: We will use these rows in a later lab to create an operator interface.
5. Enter the comment Find the Logo in cell B10. Be sure to start with an apostrophe (').
6. Insert a FindPatterns function into cell C12 of the spreadsheet.
   The FindPatterns Property Sheet displays.


7. Configure the Parameters of the FindPatterns Property Sheet as follows:
   Model Region – As shown in the screenshot below step 8a.
   Model Settings – Model Type – Edge model
   Coarseness – Medium
   Find Region – Make its size about ¼ to ½ of the FOV, as shown below
   Angle Range – 45
   Show – Input and result graphics
   Allow the other parameters to remain at their default values.
8. Draw the Model Region by double clicking on the Model Region parameter or by clicking the Edit Graphic icon in the toolbar.
   a. Draw the Model Region as shown below.
   [Image: block with the Model Region and Find Region outlined]
9. Draw the Find Region by double clicking on the Find Region parameter or by clicking the Edit Graphic icon in the toolbar.
   a. Draw the Find Region as shown above.
10. To see the actual Edge model, stay in the Property Sheet.
    Notice the green lines around the Cognex logo.
11. Click Overlay in the View menu or click the Overlay icon in the toolbar to turn off the overlay to see the image without the spreadsheet blocking it.
12. Use In-Sight Explorer's Zoom feature (Image → Zoom) to enlarge the image until the edge model is clear.


NOTE: Zoom is found under the Image Menu and on the Icon toolbar as shown below.

13. When done, turn the Overlay back on and Zoom to Fill.
14. Click OK to finalize the FindPatterns configuration.
15. Double-click on cell A0 to change the AcquireImage to Continuous Trigger.
16. Click the Online button to go online.
17. Move the part around, rotate it and observe the FindPatterns results in the spreadsheet when the model is within the Find Region and when it is outside the Find Region.
18. Observe the Angle value as you rotate the block.
19. Go Offline.
20. Double-click on cell A0 to change the AcquireImage back to Manual Trigger (see step 15).
21. Save the job as MyPatterns.job on the In-Sight camera and your own folder on the PC.


Lab Exercise 3.3 – PatMax Tools (if time allows)


Use the PatMax tools TrainPatMaxPattern and FindPatMaxPatterns to locate the block.

TrainPatMaxPattern

FindPatMaxPatterns

In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard section.

Section 3 – Pattern & Logic Tools

• Apply the Property Sheet parameters and auto-inserted information for FindPatterns to a sample image
• Configure the PatMax Patterns tools
• Create basic mathematical formulas involving If, And, InRange, and Not functions
• Identify uses for the PatMax technology

1. What are the three parameters that are typically used to locate a part?

2. List three parameters in the FindPatterns tool that provide a tradeoff between speed and accuracy.

3. List three enhancements in the PatMax Property Sheets compared to the FindPatterns tool.

4. List at least three logic functions and give an example of each.

5. Suppose you are inspecting for the proper location of the metal tab on top of a shiny soda can using a bright light above it, so there is glare. Compare the advantages and disadvantages of using FindPatterns, PatMax and PatMax RedLine.
Histogram & Edge Tools
Section 4

Objectives

At the end of this section Participants will be able to:

- Apply the Property Sheet parameters and auto-inserted information for the ExtractHistogram tool to a sample image
- Apply the Property Sheet parameters and auto-inserted information for the Edge Functions to a sample image
- Describe the two groups of Edge functions
- Explain why the region must be rotated for a horizontal edge

Section 4 | Slide 2

In the fourth section of the In-Sight Spreadsheets Standard training we will focus on Histogram and Edge Tools.

At the end of this section Participants will be able to:

- Apply the Property Sheet parameters and auto-inserted information for the ExtractHistogram tool to a sample image
- Apply the Property Sheet parameters and auto-inserted information for the Edge Functions to a sample image
- Describe the two groups of Edge functions
- Explain why the region must be rotated for a horizontal edge



Block Inspection ExtractHistogram

Steps:

1. Determine presence and position of block.
2. Determine if block has gouge.
3. Determine if the block length is within tolerance.
4. Determine if hole sizes are within tolerance.

[Image: histogram of the block's greyscale values; Black = 0, White = 255]

Section 4 | Slide 3 Section 4 | Slide 4

Block Inspection Steps:

1. Determine the presence and position of the block using the FindPatterns or PatMax Tool – this was completed in section 3.
2. Determine if the block has a gouge using the ExtractHistogram tool.
3. Determine if the block length is within tolerance using the FindSegment tool.
4. Determine if the hole sizes are within tolerance using the ExtractBlobs tool.

ExtractHistogram accesses every whole pixel in an image region to accumulate a histogram. Individual pixels are classified according to their greyscale value, and a count is maintained of the number of pixels at each value.

In an 8-bit greyscale image, there are 256 (2^8) possible pixel values; therefore, the accumulated histogram will contain 256 elements, where each element contains a value representing the count of the number of pixels with greyscale values equal to the index number of the array element.
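The accumulation described above — one counter per greyscale value — can be sketched in a few lines of Python. The pixel values below are invented for the example; the real tool works on the camera image's region of interest:

```python
def extract_histogram(pixels):
    """Accumulate a 256-element histogram from 8-bit greyscale pixels.

    Element i counts how many pixels have greyscale value i, matching
    the description of ExtractHistogram above. Plain-Python sketch.
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    return hist

# A tiny made-up region: two black pixels, one mid-grey,
# three light pixels, one white.
region = [0, 0, 12, 200, 200, 200, 255]
hist = extract_histogram(region)
```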



ExtractHistogram Applications ExtractHistogram Applications

• Detect presence/absence
• Check illumination levels
• Determine the uniformity of the grey values
– Are there any scratches, dust, debris, etc.?

Section 4 | Slide 5 Section 4 | Slide 6

This slide shows some examples of when the ExtractHistogram tool could be used.

The Histogram tools are useful in examining images for issues such as detecting the presence/absence of a feature, or qualifying the greyscale values to determine if there are any scratches, dust or debris.



Adding ExtractHistogram to the Spreadsheet ExtractHistogram: Property Sheet

Section 4 | Slide 7 Section 4 | Slide 8

If the image being processed is greyscale, the ExtractHistogram function is inserted into the spreadsheet and the parameters are configured to define the area of the image that will undergo histogram analysis. If the image is a color image, the ExtractColorHistogram function is used.

As with the other tools, insert a comment (using an apostrophe) and drag the tool from the Tool Palette and drop it into the cell within the spreadsheet.

The ExtractHistogram Property Sheet includes the following:

- Image – Reference to target image cell
- Fixture – Where tool should fixture itself
- Region – Region specifying interest zone
- External Region – Selection of non-rectangular regions
- Show – Graphic options for display



ExtractHistogram: Histogram ExtractHistogram: Auto-Inserted Functions

Section 4 | Slide 9 Section 4 | Slide 10

The charts on this slide display the greyscale values of the pixels in the function's Region of Interest.

- The X-axis of the graph represents the total number of greyscale values (0 to 255).
- The Y-axis of the graph represents the number of pixels at a given greyscale value, and the scale is established by displaying the greyscale value with the greatest number of pixels.
- The green vertical line indicates the threshold.

The following Vision Data Access functions are automatically inserted into the spreadsheet to create the results table:

- Thresh – The binary threshold separating dark from light pixels (0 to 255).
  - Function inserted: HistThresh
- Contrast – The greyscale image contrast between the mean greyscale above Thresh and the mean greyscale below Thresh.
  - Function inserted: HistContrast
- DarkCount – The number of pixels below Thresh.
  - Function inserted: HistCount
- BrightCount – The number of pixels above Thresh.
  - Function inserted: HistCount
- Average – The mean greyscale value, called Brightness in EasyBuilder.
  - Function inserted: HistMean
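Three of these results are simple to compute once a threshold is known. The sketch below passes the threshold in directly, whereas In-Sight computes Thresh automatically; the pixel values are invented for the example:

```python
def histogram_stats(pixels, thresh):
    """Compute DarkCount, BrightCount and Average for a pixel region.

    `thresh` stands in for the Thresh value the tool reports; In-Sight
    derives it automatically, here it is supplied by the caller.
    """
    dark_count = sum(1 for p in pixels if p < thresh)
    bright_count = sum(1 for p in pixels if p > thresh)
    average = sum(pixels) / len(pixels)  # mean greyscale (Brightness)
    return dark_count, bright_count, average

dark, bright, avg = histogram_stats([10, 20, 200, 250], thresh=128)
```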



Fixturing the ExtractHistogram ExtractHistogram with Fixturing

Section 4 | Slide 11 Section 4 | Slide 12

As block locations vary in the field of view, the histogram region will now move accordingly.

Fixturing means specifying the ExtractHistogram region of interest relative to the Block location found with FindPatterns.



Fixturing the ExtractHistogram ExtractHistogram: Deciding Pass or Fail

Use your reference buttons

H21 = If(E21<50, 1, 0)

We use numbers 1 and 0 for Pass and Fail because we can easily use that as an input into other functions (future sections).

Section 4 | Slide 13 Section 4 | Slide 14

Fixturing means specifying the ExtractHistogram region of interest relative to the Block location found with FindPatterns. You will need to make 3 references for full fixturing of the block: Row, Column & Theta (Angle).

As we learned in Section 3 when reviewing the Logic tools – if cell E21 is less than 50, a 1 will be returned (Pass), but if the cell is 50 or greater, a 0 will be returned (Fail).

NOTE: It's a good idea to put all logic (If, InRange, etc.) for Pass/Fail in the same row as the applicable tool to make your job more readable.



Block Inspection

Steps:

1. Determine presence and position of block.
2. Determine if block has gouge.
3. Determine if the block length is within tolerance.
4. Determine if hole sizes are within tolerance.

Edge Tools

Section 4 | Slide 15 Section 4 | Slide 16

Block Inspection Steps:

1. Determine the presence and position of the block using the FindPatterns or PatMax Tool – this was completed in section 3.
2. Determine if the block has a gouge using the ExtractHistogram tool – just completed.
3. Determine if the block length is within tolerance using the FindSegment tool.
4. Determine if the hole sizes are within tolerance using the ExtractBlobs tool.

In this section we will review Edge Tools.



Edges Edge Applications

Edges represent a location in an image where a transition from dark to light (or vice versa) occurs

Edges may be straight, curved, or even a complete circle

Section 4 | Slide 17 Section 4 | Slide 18

In machine vision terminology, an Edge is defined as the boundary (either a line, arc or circle) between two adjacent pixel groups with contrasting greyscale values. The In-Sight Edge Tools are used to detect and process statistics about the found edges.

Edges can be comprised of a single edge, or a pair of edges, which consists of two transitions from dark to light or light to dark as shown above.

NOTE: The In-Sight Tools only analyze greyscale images; all color images are automatically converted to a greyscale value.

Edge Tools should be used in the following circumstances:

- The edge has a high contrast between light and dark pixels.
- The application requires quick detection of features. The Edge Tools are among the fastest In-Sight Vision Tools, providing the ability to detect edge features faster than patterns, for example.

Section 4 | Slide 17 Section 4 | Slide 18


Edge Applications

Section 4 | Slide 19

This slide shows some of the applications that you can use an Edge tool to complete:

- Gauge parts or part features
- Find circle features (center and radius)
- Quickly locate parts (by finding part edges)
- Determine relative contrast using the score (0-100)

FindSegment

Section 4 | Slide 20

The FindSegment Tool locates a pair of edges within an image region and computes the
arc distance between them. FindSegment forms a one-dimensional projection of the
image region by summing pixel values on radial line segments scanned in the positive
Y-direction relative to the region's local coordinate system.

Edge transitions are extracted from the projected image data. The arc segment over
which the edge-to-edge distance is computed is derived from the region used to
extract the edges.
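The project-then-detect idea behind FindSegment can be sketched in plain Python. This
is an illustrative approximation only, not Cognex's algorithm: it assumes a straight,
axis-aligned region, projects by averaging each column, and takes edges from the
first derivative of the projection:

```python
def segment_width(region, edge_thresh=50):
    """Find a light segment between an edge pair in a greyscale region.

    region: list of rows (lists of pixel values 0-255).
    Returns the distance (in pixels) between the first rising (dark-to-light)
    and last falling (light-to-dark) edge, or None if no pair is found.
    """
    cols = len(region[0])
    # 1-D projection: average each column of the region
    proj = [sum(row[c] for row in region) / len(region) for c in range(cols)]
    # First derivative of the projection; large values mark edge transitions
    deriv = [proj[c + 1] - proj[c] for c in range(cols - 1)]
    rising = [c for c, d in enumerate(deriv) if d > edge_thresh]
    falling = [c for c, d in enumerate(deriv) if d < -edge_thresh]
    if rising and falling:
        return falling[-1] - rising[0]
    return None

# A dark block with a white segment 4 pixels wide
row = [20, 20, 200, 200, 200, 200, 20, 20]
print(segment_width([row, row, row]))  # -> 4
```

The real tool additionally handles rotated regions, sub-pixel edge positions, and
scoring, but the projection/derivative structure is the same idea.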


Adding FindSegment to the Spreadsheet

Section 4 | Slide 21

As with the other tools, insert a comment (using an apostrophe) and drag the tool
from the Tool Palette and drop it into the cell within the spreadsheet.

FindSegment: Property Sheet

Section 4 | Slide 22

The FindSegment Property Sheet includes the following:

- Image – Reference to target image cell
- Fixture – Where tool should fixture itself
- Region – Region specifying interest zone
- Segment Color – Grayscale intensity between edges (black or white)
- Find By – Helps select from multiple pairs
- Accept Thresh – Minimum contrast score
- Normalize Score – Helps find low contrast edges
- Angle Range – Allowed angle variation for edges
- Edge Width – Number of pixels over transition
- Show – Graphic options for display


FindSegment: Region

Section 4 | Slide 23

It is important to be aware that Edge tools have a specific direction in which they
search – notice the direction of the arrows on the slide. The arrow must touch the
edge in order to find it (NOT FOUND vs. FOUND in the examples) – the arrow in the
first example is parallel to the edge, so it does not find it.

NOTE: The way that you draw your ROI sets the direction of the search arrow.

FindSegment: Chart

Section 4 | Slide 24

The Edge Response Chart (displayed when the Show parameter is set to show all: input,
result and chart) displays the first derivative of the greyscale values found in the
ROI, with peaks and valleys representing the major edge transitions.

The Accept Thresh/Minimum Contrast parameter is used to set a minimum peak height
(contrast threshold). With this set, any peaks that are lower than the minimum peak
height are excluded from the results. This allows the inspection analysis to be
limited to just those edges in the image that are of a certain magnitude.

Use the function's edge response chart to determine the correct contrast threshold.

- The score axis is defined by the Score (100 to -100) and the Accept Threshold
  parameter set for the tool. Peaks (positive scores) indicate that the edge
  transitions from dark to light, while valleys (negative scores) indicate that the
  edge transitions from light to dark (a score of 0 indicates no edge was detected).
- The Offset axis refers to the ROI in which the edge feature is detected, where 0
  represents the beginning of the region and the right-hand value marks the maximum
  width (in pixels) of the region. The apex of the peaks or valleys along the Offset
  axis indicates the position of the found edge within the ROI.
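The Accept Thresh filtering described above can be sketched as a one-liner over a
score trace. This is a hypothetical helper, not In-Sight code; the trace values stand
in for the first-derivative scores plotted on the chart:

```python
def edges_from_scores(scores, accept_thresh=30):
    """Keep only edge peaks whose contrast magnitude clears Accept Thresh.

    scores: first-derivative trace along the ROI (positive = dark-to-light,
    negative = light-to-dark). Returns (offset, score) pairs for found edges.
    """
    return [(i, s) for i, s in enumerate(scores) if abs(s) >= accept_thresh]

trace = [0, 5, 80, 5, 0, -12, -95, -4, 0]
print(edges_from_scores(trace))  # -> [(2, 80), (6, -95)]
```

Raising the threshold above 80 here would discard the weaker dark-to-light edge,
which is exactly how Accept Thresh limits results to edges of a certain magnitude.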


FindSegment: Segment Color

Section 4 | Slide 25

Segment Color – specifies the color of the segment to be located. FindSegment will
only report edge pairs of the specified polarity.

- 0 = black (default) – Specifies a black segment, or white-to-black followed by
  black-to-white polarity.
- 1 = white – Specifies a white segment, or black-to-white followed by white-to-black
  polarity.

FindSegment: Deciding Pass or Fail

H21 = InRange(D21,270,285)

We use the numbers 1 and 0 for Pass and Fail because they can easily be used as
inputs into other functions (future sections).

Section 4 | Slide 26

As we learned in Section 3 when reviewing the Logic tools – if the distance in cell
D21 falls within the range 270 to 285, a 1 will be returned (Pass); otherwise, a 0
will be returned (Fail).

NOTE: It's a good idea to put all logic (If, InRange, etc.) for Pass/Fail in the same
row as the applicable tool to make your job more readable.
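The InRange check can likewise be sketched as a small Python function (assuming
inclusive bounds, which is how a tolerance window is normally read):

```python
def in_range(value, lo, hi):
    """Mimics InRange(D21, 270, 285): 1 if lo <= value <= hi, else 0."""
    return 1 if lo <= value <= hi else 0

print(in_range(278, 270, 285))  # segment width in tolerance -> 1 (Pass)
print(in_range(301, 270, 285))  # too wide -> 0 (Fail)
```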


Functions that Find Edges

Section 4 | Slide 27

The following functions find edges:

- Caliper – Measures the width between edges; locates edges and the location and
  spacing of edge pairs within a Region of Interest (ROI), based upon an edge model.
- FindCircle – Locates a single circular edge within an annular ROI.
- FindCircleMinMax – Inspects the circularity of a continuous edge.
- FindCurve – Locates a single arced edge within a bent ROI.
- FindLine – Locates a single straight-line edge within a ROI.
- FindMultiLine – Locates multiple straight-line edges within a ROI.
- FindSegment – Locates a pair of straight-line edges within a ROI.

Functions that Operate on Edges

Section 4 | Slide 28

The following functions operate on edges:

- PairDistance – Computes the distance between two edges.
- PairEdges – Sorts arrays of edges into arrays of edge pairs.
- PairMaxDistance – Finds the maximum edge pair distance.
- PairMeanDistance – Computes the mean edge pair distance.
- PairMinDistance – Finds the minimum edge pair distance.
- PairSDevDistance – Computes the standard deviation of edge pair length.
- PairToEdges – Converts an array of edge pairs into an array of edges (by
  averaging).
- SortEdges – Sorts edges by a specific criterion.


Summary

• The ExtractHistogram tool calculates statistics about pixels' greyscale values in a
  specified region of an image.
• Fixturing handles the problem of parts' locations varying in the image.
• Two groups of Edge functions:
  • Functions that find edges
  • Functions that operate on edges returned by the functions in group 1

Section 4 | Slide 29

In this section we covered the following topics:

- The ExtractHistogram tool calculates statistics about pixels' greyscale values in a
  specified region of an image.
- Fixturing handles the problem of parts' locations varying in the image.
- There are two groups of Edge functions:
  - Functions that find edges
  - Functions that operate on edges returned by the functions in group 1

Lab Exercise

Section 4 | Slide 30

Complete:

Lab Exercise 4.1 – ExtractHistogram
Lab Exercise 4.2 – FindSegment
Lab Exercise 4.3 – (if time allows)


In-Sight Spreadsheets Standard Section 4 | Lab Exercise

Lab Exercise 4.1 – ExtractHistogram

At the end of this lab exercise, Participants will be able to:
• Utilize the FindSegment tool to determine the distance (in pixels) across the block
• Fixture both vision functions to the Row, Column, and Angle returned by
  FindPatterns
• Use If functions to specify pass (1) or fail (0) for both tests

The Participant will utilize the following In-Sight Functions to successfully
complete this exercise:
• ExtractHistogram
• AcquireImage
• If
• Fixturing

Follow the steps below to complete the lab exercise:

1. Load MyPatterns.job from the previous lab.
   NOTE: You will analyze the area indicated in the image below (Check for Gouge).
2. Enter the Comment Check for Gouge in cell B14.

Page 1

NOTE: Remember to leave a few blank rows between vision functions to allow for
comments (best practice is to enter the comments in as you go along).

3. Insert an ExtractHistogram function into cell C16 of the spreadsheet.
   The ExtractHistogram Property Sheet displays.
4. Fixture it to the Row, Column, and Angle reported by the FindPatterns function by
   double-clicking on Row under Fixture.

Page 2
NOTE: This is done by clicking the left mouse button and dragging across the results
from your FindPatterns tool that was created in the last lab (it will highlight with
a red box), then pressing the <Enter> key.

5. Double-click on the word Region in the Property Sheet and position the Region as
   shown below.

Page 3

6. Set the Show parameter to input graphics only. This will allow you to always be
   able to see the Region. Click the OK button.
   NOTE: We will look at Contrast and Average as possible parameters to use for
   determining Pass/Fail.
7. Write down the Contrast and Average values returned when there are no gouges
   (Good Block) and when there are gouges (Bad Block).
   No Gouges:
   Contrast = ______________ Average = _______________
   Gouges:
   Contrast = ______________ Average = _______________
8. Pick an appropriate threshold (limit) to distinguish between these two cases.
9. Enter a comment in cell K15 that indicates that you are creating a logic
   statement.
10. Use the threshold limit that you determined above in an If function (under
    Mathematics  Logic) in cell K16 that gives you a value of 1 for no gouges and 0
    for gouges.
    NOTE: We will use this value later to generate color indicators, green for pass
    and red for fail.

Page 4
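To see why a gouge moves both statistics, here is a plain-Python sketch of region
statistics. The mean matches the tool's Average in spirit; the contrast measure here
is simply the standard deviation of the greyscale values, which is only one plausible
definition – In-Sight's exact Contrast computation may differ:

```python
def histogram_stats(pixels):
    """Return (average, contrast) for a flat list of greyscale values.

    Illustrative only: contrast is modeled as the standard deviation,
    so a uniform surface scores 0 and a gouged surface scores higher.
    """
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    return mean, variance ** 0.5

good = [200] * 100             # uniform bright surface
bad = [200] * 80 + [40] * 20   # dark gouge pixels lower the average, raise contrast
print(histogram_stats(good))   # -> (200.0, 0.0)
print(histogram_stats(bad))    # -> (168.0, 64.0)
```

Any threshold between the good and bad values (for either statistic) would work as
the limit chosen in step 8.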
11. Confirm AcquireImage is still set in Manual Mode.
12. Move the block a little in the Field of View and trigger (F5). Repeat several
    times.
13. Verify that the Region for ExtractHistogram follows the movement of the block.
14. Check the value of the If function for a Good Block (1 = no gouges) and a Bad
    Block (0 = gouges present).
15. Save the job as MyHistogram.job on the In-Sight camera and in your own folder on
    the PC.

Page 5

Lab Exercise 4.2 – FindSegment

At the end of this lab exercise, Participants will be able to:
• Utilize the FindSegment tool to determine the distance (in pixels) across the block
• Fixture both vision functions to the Row, Column, and Angle returned by
  FindPatterns
• Use InRange functions to specify pass (1) or fail (0) for both tests

The Participant will utilize the following In-Sight Functions to successfully
complete this exercise:
• FindSegment
• AcquireImage
• InRange

Follow the steps below to complete the lab exercise:

1. Continue with MyHistogram.job from the previous lab.
2. Enter the Comment Block Width in cell B18.
3. Insert a FindSegment function into cell C20.

Page 6
The FindSegment Property Sheet displays.

4. Configure the parameters of the FindSegment Property Sheet as follows:

   Fixture – Reference the Row, Column, and Angle returned by FindPatterns (follow
   the same steps as in the Histogram lab)
   Region – Set its size to span the length of the block, and be perpendicular to the
   edges of the cutout, as shown below.
   Segment Color – The segment between these two edges is white, compared to the
   darker background of the block, so specify white.
   Find By – widest segment
   Angle Range – 5
   Edge Width – 6
   Show – input and result graphics

   Allow the remainder of the defaults to remain.

5. Click the OK button.
   NOTE: The direction of the red arrow needs to be perpendicular to the edge.

Page 7

6. Record the distance returned for a good block and a bad block:
   Correct Gap Width: _______________________________________
7. Pick an appropriate minimum and maximum tolerance for the gap width.
8. Enter a comment in cell K19 that indicates that you are creating a logic
   statement.
9. Use the tolerance that you determined above in an InRange function (under
   Mathematics  Logic) in K20 that gives a value of 1 for Pass, 0 for Fail.
10. Confirm AcquireImage is still set in Manual Mode.
11. Move the block around in the Field of View, triggering with F5 each time you do.
12. Verify that the Region for FindSegment follows the movement of the block.
13. Check the value of the InRange function for a good block and a bad block.
14. Save the job as MyEdges.job on the In-Sight camera and in your own folder on the
    PC.

Page 8
Lab Exercise 4.3 – (if time allows)

Suppose you placed the Edge tool's Region across the word Cognex on the block, so
that unwanted edges are now being detected.

Which of the four choices for Find By in FindSegment's Property Sheet would be best
to avoid misinterpreting the letters as an edge?

Try implementing two FindLine functions, one for each edge, to handle this situation.
HINT: Direction of search is important here.

Page 9
In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The
questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard
section.

Section 4 – Histogram & Edge Tools


• Apply the Property Sheet parameters and auto-inserted information for the
ExtractHistogram tool to a sample image
• Apply the Property Sheet parameters and auto-inserted information for the
Edge Functions to a sample image
• Describe the two groups of Edge Functions
• Explain why the region for the Edge tool must be rotated to detect a
horizontal edge

1. List the five results automatically inserted in the cells by the ExtractHistogram tool.

2. List at least two kinds of inspections for which ExtractHistogram may be used.

3. List the two categories of Edge Tools and name a tool in each category.

4. Why must the region for an Edge Tool be rotated when locating a horizontal edge?

Page 1
Blob Tools & Image Tools
Section 5

Objectives

At the end of this section Participants will be able to:

• Apply the Property Sheet parameters and auto-inserted information for the
  DetectBlobs tool to a sample image
• Import and export Snippets
• Describe how Image Tools are used and give some examples
• Explain when and how to use the SurfaceFX tool

Section 5 | Slide 1

In the fifth section of the In-Sight Spreadsheets Standard training we will focus on
Blob and Image Tools, as well as Snippets.

Section 5 | Slide 2

At the end of this section Participants will be able to:

- Apply the Property Sheet parameters and auto-inserted information for the
  DetectBlobs tool to a sample image
- Import and export Snippets
- Describe how Image Tools are used and give some examples
- Explain when and how to use the SurfaceFX tool


Block Inspection

Steps:

1. Determine presence and position of block.
2. Determine if block has gouge.
3. Determine if the block length is within tolerance.
4. Determine if hole sizes are within tolerance.

Blobs

• Blob = set of connected pixels with a grayscale value above (or below) a specified
  threshold
  - in other words, a light shape on a dark background or vice versa
• Views the image as all black and white (0 and 255)
  - "Binarizes" the image

Section 5 | Slide 3

Block Inspection Steps:

1. Determine the presence and position of the block using the FindPatterns or PatMax
   tool – completed in Section 3.
2. Determine if the block has a gouge using the ExtractHistogram tool – completed in
   Section 4.
3. Determine if the block length is within tolerance using the FindSegment tool –
   completed in Section 4.
4. Determine if the hole sizes are within tolerance using the DetectBlobs tool.

Section 5 | Slide 4

A Blob is a set of connected pixels with a greyscale value above (or below) a
specified threshold. In other words, a blob is a light shape on a dark background, or
vice versa.


DetectBlobs

Section 5 | Slide 5

The DetectBlobs* function is used to identify and locate blobs of connected pixels,
which can be comprised of various shapes and sizes. It finds sets of connected pixels
with a greyscale value above (or below) a specified threshold; in other words, it
finds dark shapes on a light background or vice versa.

The DetectBlobs function can be used to measure the x, y, angle, color, score, area,
elongation, holes, perimeter & spread of blobs.

How many blobs do you see on this block?

* Earlier versions of In-Sight software used a tool called ExtractBlobs instead of
DetectBlobs. The two tools work in similar ways; they have the same list of
parameters in their Property Sheets, and auto-insert the same kinds of results into
the spreadsheet. The actual values returned may vary slightly between the two tools.

DetectBlobs Applications

Section 5 | Slide 6

This slide shows some examples of when the DetectBlobs tool could be used.
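The core of blob detection – binarize, then group connected pixels – can be sketched
in plain Python. This is an illustrative 4-connectivity flood fill, not Cognex's
implementation, and it reports only blob areas (DetectBlobs also reports angle,
elongation, holes, and so on):

```python
from collections import deque

def detect_blobs(image, threshold, dark_blobs=True):
    """Return the areas of connected blobs, largest first.

    image: list of rows of greyscale values 0-255.
    dark_blobs: True finds pixels below the threshold (dark shapes on a
    light background); False finds light shapes.
    """
    rows, cols = len(image), len(image[0])
    # Binarize: True marks a blob pixel
    mask = [[(p < threshold) == dark_blobs for p in row] for row in image]
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                area, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:                      # flood fill one blob
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return sorted(areas, reverse=True)  # index 0 = largest, as in In-Sight

img = [
    [200, 200, 200, 200, 200],
    [200,  30,  30, 200, 200],
    [200,  30,  30, 200,  30],
    [200, 200, 200, 200,  30],
]
print(detect_blobs(img, threshold=64))  # -> [4, 2]
```

Sorting largest-first mirrors the indexing convention shown later on the Graphics
slide, where index 0 is the largest blob.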


Adding DetectBlobs to the Spreadsheet

Section 5 | Slide 7

As with the other tools, insert a comment (using an apostrophe) and drag the function
from the Tool Palette and drop it into the cell within the spreadsheet.

DetectBlobs: Property Sheet

Section 5 | Slide 8

The DetectBlobs Property Sheet includes the following:

- Image – Reference to target image cell
- Fixture – Where tool should fixture itself
- Region – Region specifying interest zone
- External Region – Select non-rectangular region
- Number to Sort – Number of blobs to list info
  NOTE: Setting Number to Sort to 0 only counts the number of blobs in the region.
  No other blob results are reported.
- Threshold – Value separating black/white
  NOTE: Setting Threshold to -1 performs automatic thresholding.
- Fill Holes – Include blob hole area in result
- Boundary Blobs – Consider blobs touching region limit
- Color: Blob – Are blobs dark, light, or either?
- Color: Background – Is the background dark or light?
- Area Limit: Min – Minimum blob size to report
- Area Limit: Max – Maximum blob size to report
- Show – Graphic options to display


DetectBlobs: Region

Section 5 | Slide 9

Set the Region around the area to be analyzed – to analyze multiple blobs in an area,
you can extend the region's rectangle to include all of the area to be analyzed.

DetectBlobs: Results

Section 5 | Slide 10

If 'Number to Sort' is set to 0 in the Property Sheet, the only result is the count.
If 'Number to Sort' is set to 1 or more, the following results are returned:

- Row: Returns the Y-coordinate (Row) of the blob's center of mass.
  - Function inserted: GetRow
- Col: Returns the X-coordinate (Column) of the blob's center of mass.
  - Function inserted: GetCol
- Angle: Returns the angle of the blob's center of mass, relative to the center of
  the ROI.
  - Function inserted: GetAngle
- Color: Returns the color value (0 = black, 1 = white) of a blob.
  - Function inserted: GetColor
- Score: A measure of how closely the blob matches the criteria of the DetectBlobs
  function.
  - Function inserted: GetScore
- Area: Returns the area of the blob (measured in pixels).
  - Function inserted: GetArea
- Elongation: Returns a value that represents how a blob's pixels are stretched out
  from the blob's center of mass.
  - Function inserted: GetElongation
- Holes: Returns the number of holes contained within the blob.
  - Function inserted: GetHoles
- Perimeter: Returns the length of the boundary around the blob.
  - Function inserted: GetPerimeter
- Spread: Returns a value that represents how a blob's pixels are distributed away
  from the blob's center of mass.
  - Function inserted: GetSpread
Threshold Example

Light area greyscale = 185; dark area greyscale = 25.

  Thresh: 64    Blob: Black    Found: 1
  Thresh: 10    Blob: Black    Found: 0
  Thresh: 192   Blob: Black    Found: 1
  Thresh: 64    Blob: White    Found: 1

Section 5 | Slide 11

The greyscale count for the light area is 185 and for the dark area is 25.

DetectBlobs: Graphics

Section 5 | Slide 12

In-Sight outlines valid blob results in green – since the Region was set around the
first blob, it returns one result. Had the region been stretched to include the three
blobs, it would have returned the result of 3.

The indexing starts at zero (0) – which is the largest blob. If three results had
been returned, the indexing would return 0, 1, and 2.

The pixel greyscale value within the blob is 35 and the pixel greyscale value
surrounding the blob is 189.
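The threshold table above can be reproduced with a tiny sketch. This is a
hypothetical helper for the slide's single dark-shape-on-light scene only, assuming a
black blob needs the dark area below the threshold and a white blob needs the light
area at or above it:

```python
def blob_found(dark_grey, light_grey, thresh, blob_color):
    """1 if a blob of the given color would be found, else 0."""
    if blob_color == "black":
        return 1 if dark_grey < thresh else 0
    return 1 if light_grey >= thresh else 0

# Replays the four rows of the Threshold Example (dark = 25, light = 185)
for thresh, color in [(64, "black"), (10, "black"), (192, "black"), (64, "white")]:
    print(thresh, color, blob_found(25, 185, thresh, color))
```

Note the second row: with the threshold at 10, even the dark area (greyscale 25) sits
above it, so no black blob exists after binarization.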


DetectBlobs: Deciding Pass or Fail

L29 = InRange(J29, 1900, 2400)

We use the numbers 1 and 0 because we can easily use them as inputs into other
functions (future sections).

Section 5 | Slide 13

As we learned in Section 3 when reviewing the Logic tools – in this example, if the
Area falls within 1900 and 2400, the tool will Pass.

NOTE: It's a good idea to put all the logic in the same row as the applicable tool to
make your job more readable.
NOTE: The most important thing is to keep things properly labeled and organized.

Snippets

Section 5 | Slide 14

In this section we will cover Snippets.


Snippets

Section 5 | Slide 15

Snippets automate frequently performed tasks: the Snippet dialog exports groups of
preconfigured cells, saved as a Cell Data (.CXD) file, to the Snippets folder on the
PC. The snippet can then be imported into the spreadsheet. Alternately, snippets can
be imported by dragging and dropping the snippet directly from the Palette into the
spreadsheet. You can import and export snippets from one spreadsheet to another so
that small pieces of functionality can be reused.

To export Snippets:
1. Select the cell(s) whose data you would like to export.
2. From the File Menu, click Snippet  Export, or alternately, right-click on the
   cell(s) whose data you would like to export and select Snippet  Export from the
   shortcut menu.
3. Select the location where you want to save the exported snippet.
4. To save the active cell(s) as a new file, enter a name inside the File Name: text
   box and click Save. To replace an existing file, select that file from the list
   and click Save (this will overwrite any data contained in the file). In-Sight
   Explorer automatically appends the .CXD file extension to all exported cells.

To import Snippets:
1. Select an empty cell.
2. From the File menu, click Snippet  Import, or alternately, right-click on the
   cell and select Snippet  Import from the shortcut menu.
3. Select the location that contains the snippet to open.
4. Highlight the desired file and click Open. Alternately, you can double-click the
   snippet file or type in the file name.

Included Snippets

Section 5 | Slide 16

In-Sight Explorer includes Snippets featuring some of the most popular functionality
used in machine vision applications.

The Snippets tabs (found in the Palette pane) can be used to access the Snippets
included in the installation (#1 on the slide).

You can also access those Snippets you have previously exported to the Snippets
folder (#2 on the slide).


Image Tools

Section 5 | Slide 17

In this section we will cover a number of the Image Tools that are available in the
Spreadsheets.

Image Tools

The goal is to enhance the original image:

Original image (A0)  ->  Image Tool (in another cell)  ->  Vision tool points to the
cell containing the Image Tool

Section 5 | Slide 18

The goal is to enhance the original image. We can do this by:
- Highlighting the desirable features
- Removing or diminishing the undesirable features

Tools using the filter must be contained within the search region of the image
filter.

Note that prior to version 5.x, filters were listed in two categories instead of just
Filter: Neighbor Filters and Point Filters.

In a color system, there are additional filters.


Nesting of Regions

The Vision tool's region must be the same as or fall inside the region of the Image
Tool it references.

Section 5 | Slide 19

Regions need to be nested, as shown on the slide: the region of the Vision tool that
references the Image tool sits inside the region of the Image tool, which sits inside
the original image.

CompareImage

• Compares each part to a trained image (template)
• Result is an image where differences are in white

Section 5 | Slide 20

This slide shows how the CompareImage function works. The function trains on a
template that represents a good part, and then compares each part to be inspected to
the template. The resultant image represents the difference between the template and
the part. The greater the difference in greyscale values for a pixel, the whiter the
resultant pixel in the CompareImage. If the two images were identical, the
CompareImage would be all black (greyscale 0).

A tool can then reference the CompareImage, as shown on the next slide.
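The difference-image idea can be sketched as a per-pixel absolute difference. This is
a simplified illustration only – the real CompareImage tool also handles training and
alignment of the template:

```python
def compare_image(template, current):
    """Per-pixel absolute difference between two greyscale images.

    Identical pixels -> 0 (black); the larger the greyscale difference,
    the brighter (whiter) the output pixel.
    """
    return [[abs(t - c) for t, c in zip(trow, crow)]
            for trow, crow in zip(template, current)]

template = [[50, 50], [50, 200]]
current = [[50, 50], [50, 90]]   # one pixel deviates from the template
print(compare_image(template, current))  # -> [[0, 0], [0, 110]]
```

A blob tool pointed at this output would then pick up only the bright difference
regions, exactly as on the next slide.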


CompareImage

DetectBlobs references cell B27, which is the cell containing CompareImage.

Section 5 | Slide 21

The DetectBlobs (or other tool applied) will now reference the new image created,
instead of the image cell $A$0.

Filters

Some Filter Types use a Threshold.

Section 5 | Slide 22

Operation – Specifies the operation to perform on the image. Depending on the
operation, different parameters are allowed. For example, Binarize uses a Threshold,
whereas Clip uses Min and Max.

Some operations operate on a neighborhood of pixels. For those operations, the Kernel
Rows and Kernel Columns parameters are used; set them to the number of pixels of the
neighborhood that works best.

Here are all the Operations available on a greyscale camera:

- Binarize
- Bot (bottom) Hat
- Clip
- Close
- Dilate
- Edge Magnitude
- Equalize
- Erode
- Fill Dark Holes
- Fill Light Holes
- Gradient Vertical
- Gradient Horizontal
- Gradient Full
- Greyscale Distance
- High Pass
- Invert
- Local Median
- Low Pass (default)
- Max
- Open
- Optical Density
- Sharpen
- Stretch
- Threshold Range
- Top Hat


Filter Using Threshold: Binarize

Pixels in the output image are all either black (0) or white (255).

Section 5 | Slide 23

Binarize – Specifies a black-and-white ('binary') threshold operation that compares
each input pixel with the threshold level to determine whether the output pixel is
white or black. Input pixel values equal to or above the threshold value are white
(255); values below the threshold are black (0).

Filters

Some Filter Types use Min and Max.

Section 5 | Slide 24

Operation – Specifies the operation to perform on the image. Depending on the
operation, different parameters are allowed. For example, Binarize uses a Threshold,
whereas Clip uses Min and Max. (The full list of Operations available on a greyscale
camera appears under the previous slide.)
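The Binarize rule quoted above maps directly to a two-line sketch (plain Python, not
In-Sight code):

```python
def binarize(image, threshold):
    """Pixels at or above the threshold become white (255); below, black (0)."""
    return [[255 if p >= threshold else 0 for p in row] for row in image]

print(binarize([[10, 128, 240]], threshold=128))  # -> [[0, 255, 255]]
```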


Filter Using Min & Max: Clip

Minimum = 60
Maximum = 180

Section 5 | Slide 25

Clip – Specifies a "clipping" operation, which eliminates the extreme ends of the
greyscale spectrum so that the features in the output image are more uniform.

This operation compares the greyscale value of each input pixel to a minimum and a
maximum:
- If the input pixel value falls below the minimum or above the maximum, the output
  pixel is assigned ('clipped' to) the minimum or maximum value, respectively.
- If the input pixel value is within the minimum and maximum, the output pixel is
  assigned the value of the input pixel.

Filters: Clip

Example: Adjusting for hot spots (Original OCR models vs. Clipped)

Section 5 | Slide 26

This function will adjust for hot spots.

You can see that many of the numbers are not recognizable in the Original OCR models
example – but in the Clipped example all of the numbers can be recognized.
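The clipping rule can be sketched with the slide's Minimum = 60 and Maximum = 180 as
defaults (an illustrative sketch, not In-Sight code):

```python
def clip(image, lo=60, hi=180):
    """Pull extreme greyscale values in toward [lo, hi]; values already in
    range pass through unchanged (tames hot spots and deep shadows)."""
    return [[min(max(p, lo), hi) for p in row] for row in image]

print(clip([[0, 60, 120, 200, 255]]))  # -> [[60, 60, 120, 180, 180]]
```

Unlike Binarize, Clip keeps the mid-range detail intact, which is why the OCR models
in the example remain readable.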


Filters Using a Neighborhood of Pixels

Some Filter Types convert pixels based upon a neighborhood of pixel values.

Section 5 | Slide 27

Some operations involve a neighborhood of pixel values. In this example, the Erode
operation changes each pixel value in the original image to a new greyscale value
using a calculation based on a 4x4 neighborhood of pixel values around the original
pixel.

Filter Using a Neighborhood of Pixels: Erode

Shrinks white areas: the original image is too degraded to be read; the eroded image
can be read by ReadIDMax.

Section 5 | Slide 28

Erode – Specifies an 'erosion' operation, which shrinks bright features and increases
dark features. The result is an output image with larger areas of dark pixels. This
operation is useful for removing light specks or, as in this example, removing the
white "haloes" around the 2D matrix cells.
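Greyscale erosion can be sketched as a neighborhood-minimum filter. This illustrative
version uses a 3x3 neighborhood for simplicity (the slide's example uses 4x4), so it
is a demonstration of the principle rather than In-Sight's implementation:

```python
def erode(image, k=1):
    """Each output pixel is the minimum of its (2k+1)x(2k+1) neighborhood,
    so bright features shrink and dark areas grow (edges are truncated)."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(rows):
        out.append([
            min(image[y][x]
                for y in range(max(0, r - k), min(rows, r + k + 1))
                for x in range(max(0, c - k), min(cols, c + k + 1)))
            for c in range(cols)
        ])
    return out

img = [
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]
print(erode(img))  # the small isolated white square is wiped out (all zeros)
```

This is exactly the light-speck removal described above; its dual, Dilate, takes the
neighborhood maximum instead.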


Limitations of Filters

• Today's In-Sight tools generally don't need to use Image Tools
• Image Tools can decrease accuracy
  - It is better to improve the image through lighting
• Image Tools add time to your job

Section 5 | Slide 29

Use Image Tools judiciously. In-Sight's vision tools have gotten better and better,
so today's vision tools generally don't need Image Tools; the vision tools themselves
can handle degraded images.

In the case where an image is so poor that a tool has difficulty, it's better if you
can improve your image through improved lighting and optics. This is because an Image
Tool can decrease the accuracy of your results. For example, binarizing an image
produces a high-contrast image, but you have removed information by going from a
range of pixels (0-255) to all black and white (0 and 255).

If speed is critical, then another consideration is that adding an Image Tool will
add to the time for the job.

Image Tools: SurfaceFX

Section 5 | Slide 30

In this section we will cover an Image Tool named SurfaceFX.


SurfaceFX Tool: What is it?

• Enhances raised or embossed features in image
• Combines 4 images of part taken with different lighting into one high contrast
  image
• Removes noise and clutter from the surface
  - Chips, dents, wrinkles, punctures, tears
• Removes glare from lights
• Does not work on surfaces that lack reflection, e.g., glass

Section 5 | Slide 31

SurfaceFX enhances raised or embossed features in the image. It combines 4 images of
the part, taken with different lighting, into one high contrast image, removing noise
and clutter from the surface (chips, dents, wrinkles, punctures, tears). Vision tools
can reference the SurfaceFX image. It does not work on surfaces lacking reflection,
such as glass.

SurfaceFX Tool: What is it? (Single Light vs. SurfaceFX)

• Shows small defects
  - Chips, dents, wrinkles, punctures, tears
• Adds contrast
  - Engraved, embossed, stamped, etched, raised
• Removes glare from lights
  - Ambient light, glare

Section 5 | Slide 32


SurfaceFX: How Does it Work?

Photometric Stereo
• Uses surface reflections/shadows from 3 or more angled lights to determine the surface structure
• Result: shows topography of surface
  – Raised features are white
  – Lowered features are black

SurfaceFX Tool: More Examples
Single Light | SurfaceFX
Section 5 | Slide 33 Section 5 | Slide 34

SurfaceFX uses an algorithm called Photometric Stereo. This algorithm is useful in highlighting small
surface features, whether they are engraved, etched, embossed, dented, punctured, stamped, or raised.
Once contrast is created, other tools (OCR, RedLine, Blobs, InspectEdge, etc.) can easily use this
new surface image to inspect the part.

Features that are raised are white; features that are indented are black.

The diagram shows a setup using 3 lights, but In-Sight actually uses four banks of lights.
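To give a feel for the idea, the difference in shading between opposing light directions approximates the surface gradient, which can then be remapped around mid-gray. This is a simplified sketch only; the actual SurfaceFX algorithm is Cognex's own and is not published here:

```python
def surface_relief(north, east, south, west):
    """Crude photometric-stereo-style combination of four directionally lit
    images (2D lists of 0-255 grayscale values) into one high-contrast relief
    image. Simplified illustration only; not the actual SurfaceFX algorithm."""
    rows, cols = len(north), len(north[0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            gx = east[r][c] - west[r][c]    # horizontal shading difference
            gy = north[r][c] - south[r][c]  # vertical shading difference
            # Map the combined gradient around mid-gray and clamp to 0-255,
            # so raised features trend white and lowered features trend black.
            row.append(max(0, min(255, 128 + (gx + gy) // 2)))
        out.append(row)
    return out
```

A flat region that shades identically under all four lights stays at mid-gray; a feature that catches light from one side and shadows the other is pushed toward white or black.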



SurfaceFX: Options for Lights

Different ways to step through 4 banks of lights:
• Integrated Light
  – Low angle, 4 directions
• External Lights
• External Control Options
  – Discrete IO (built-in or IO Module)
  – CCS control box: sequences through 4 images
  – External Control by PLC

In-Sight 7802 Integrated Light for SurfaceFX

Palette → Input/Output → IntegratedLightControl
You sequence light banks from the spreadsheet: run the tool 4 times, each with a different Bank, and save the 4 images in the spreadsheet.
Section 5 | Slide 35 Section 5 | Slide 36

The angle of the light is critical. There is no overall answer for where to put the light. The idea is to
create shadows or reflections on high and low features. Sometimes this means putting the light
very low. Sometimes, it means putting the light very high (direct lighting). You may need to try different
positions and see which works best.

The IntegratedLightControl tool indicates which bank of lights should be on when triggering the camera.
If its settings are different from the settings in Light Settings (see previous slide), then the
IntegratedLightControl settings govern.

In the sample job, we will capture four images, each one acquired with a different light bank checked in an
IntegratedLightControl Property Sheet. We will start with the North Bank checked, and then trigger for
acquisition. Then, we will latch that image using a LatchImage function. Next, we will check East Bank
only, and trigger. We will latch this second image using a second LatchImage function. We will repeat
twice more, ending up with four images stored in the four LatchImage cells, each taken with a different
light bank. They will be referenced by the SurfaceFX tool (next slide).
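The acquire-and-latch sequence described above can be sketched as a loop. The set_bank, trigger, and latch callables below are hypothetical stand-ins for the spreadsheet's IntegratedLightControl, acquisition trigger, and LatchImage steps; they are not actual Cognex APIs:

```python
BANKS = ["North", "East", "South", "West"]

def acquire_surfacefx_inputs(set_bank, trigger, latch):
    """Capture one image per light bank and return the four latched images
    that a SurfaceFX tool would reference. Illustration of the sequencing
    only; the real job performs these steps with spreadsheet cells."""
    latched = []
    for bank in BANKS:
        set_bank(bank)                # check only this bank in IntegratedLightControl
        image = trigger()             # acquire with that bank lit
        latched.append(latch(image))  # a LatchImage cell holds the result
    return latched
```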



SurfaceFX Property Sheet
Palette → Vision Tools → Image → SurfaceFX
(image inputs: right, bottom, left, top)

SurfaceFX: Example Using SurfaceFlaw Tool
SurfaceFX image | SurfaceFlaw tool applied to SurfaceFX

Section 5 | Slide 37 Section 5 | Slide 38

Note that slightly different terminology is used compared to IntegratedLightControl: Right corresponds to
East, Bottom to South, Left to West, and Top to North.

Sigma
Specifies the smoothness value (0–10; default = 1) to help eliminate high-frequency noise in the
output image.

Brightness
Specifies the average intensity of the background in the output image. (0–255; default = 128)

Contrast
Specifies the intensity difference between surface features and the background pixels in the output
image. (0–100; default = 10)



SurfaceFX Tool: Example Using Coin
(N, E, S, W light directions; SurfaceFX result)

SurfaceFX Tool: Example
Sigma = 0 | Sigma = 10
Brightness = 10 | Brightness = 50
• Sigma: Smoothness value; helps eliminate high-frequency noise in the output image
• Brightness: Specifies the average intensity of the background in the output image
Section 5 | Slide 39 Section 5 | Slide 40

Features that are raised are white; features that are indented are black.

SurfaceFX:

Sigma
Specifies the smoothness value (0–10; default = 1) to help eliminate high-frequency noise in the
output image.

Brightness
Specifies the average intensity of the background in the output image. (0–255; default = 128)

Contrast
Specifies the intensity difference between surface features and the background pixels in the output
image. (0–100; default = 10)



Summary

• DetectBlobs locates blobs: sets of connected values above (or below) a specified threshold
• Snippets allow importing and exporting of cells for use at a later time
• Image Tools can improve the image by
  – Enhancing desired features
  – Reducing or eliminating undesired features
• Filters are usually not needed, and they can decrease accuracy and speed

Summary

SurfaceFX
• Enhances raised or embossed features in an image
• Analyzes 4 images of a part taken with different lighting to create a single high contrast image
• Can use integrated light on camera or external lights
• See sample job in Resources folder

Section 5 | Slide 41 Section 5 | Slide 42

In this section we covered the following topics:

- The DetectBlobs function locates blobs – sets of connected values above (or below) a specified threshold
- Snippets allow importing and exporting of cells for use at a later time
- Image Tools can improve the image by:
  - Enhancing desired features
  - Reducing or eliminating undesired features
- Filters are usually not needed, and they can decrease accuracy and speed
- SurfaceFX
  - Enhances raised or embossed features in an image
  - Analyzes 4 images of a part taken with different lighting to create a single high contrast image
  - Can use integrated light on camera or external lights
  - See sample job in Resources folder



Lab Exercise

Section 5 | Slide 43

Complete:
Lab Exercise 5.1 – DetectBlobs
Lab Exercise 5.2 – Snippets
Lab Exercise 5.3 – Dependencies Viewer
Lab Exercise 5.4 – Image Functions
Lab Exercise 5.5 – (if time allows)

In-Sight Spreadsheets Standard Section 5 | Lab Exercise

Lab Exercise 5.1 – DetectBlobs

At the end of this lab exercise, Participants will be able to:
• Utilize DetectBlobs to check for size of holes

The Participant will utilize the following In-Sight Functions to successfully complete this exercise:
• DetectBlobs

Follow the steps below to complete the lab exercise:

1. Load MyEdges.job from a previous lab.
2. Enter the comment Check Holes in cell B22.
3. Insert a DetectBlobs function in cell C24 in the spreadsheet.
   The DetectBlobs Property Sheet displays.
4. Fixture the tool to the same result from the FindPatterns tool.
   NOTE: Please refer to previous labs if you need assistance with fixturing.
5. Set the Region to be a square around Hole A (see below).
6. Leave the Number to Sort = 1.
7. Determine the grayscale values of your blob (the hole) and your background (the block).
   NOTE: This can be done by removing the Overlay. As your mouse moves across
   the image, the Row and Column results along with the grayscale value of the
   current pixel will be shown in the bottom left corner of the image.
In-Sight Spreadsheets Standard Section 5 | Lab Exercise In-Sight Spreadsheets Standard Section 5 | Lab Exercise

8. Write the approximate grayscale value of the following:

   Blob Grayscale: _________________________________________

   Background Grayscale: ___________________________________

9. Determine a good threshold value using the data from step 7.
   HINT: Pick a value in between the blob grayscale and the background grayscale.
10. Deselect Boundary Blobs.
11. Set the proper Blob Color.
12. Set the proper Blob Background.
13. Set the Show Parameter to input and result graphics.

    Your spreadsheet should look similar to this:

    And the image should look like this:

14. Click the OK button to finalize the DetectBlobs settings.
15. Notice the Area reported for a good hole.
16. Calculate what ± 10% of that area value should be:

    -10%: ___________________________________

    +10%: ___________________________________

17. Try this on a good block and a bad block. (In the next section, you will use the
    data calculated in the step above to set the proper tolerance for the hole.)
18. Repeat steps 3 – 17 for the other two holes (different spreadsheet cells) and write
    the 10% limits here:

    Middle hole: Right hole:
    -10%: ___________________ ________________
    +10%: ___________________ ________________
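The ±10% limits you record for each hole define a simple tolerance band, the same kind of check the CheckTolerance snippet performs in the next lab. A sketch in Python (illustration only, not spreadsheet syntax):

```python
def tolerance_band(nominal_area, pct=10.0):
    """Return the (min, max) limits at +/- pct percent around a nominal blob area."""
    delta = nominal_area * pct / 100.0
    return nominal_area - delta, nominal_area + delta

def in_tolerance(area, nominal_area, pct=10.0):
    """True if a measured blob area falls inside the tolerance band."""
    lo, hi = tolerance_band(nominal_area, pct)
    return lo <= area <= hi
```

For example, a nominal area of 500 gives limits of 450 and 550.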

19. In order to reduce unused results from the spreadsheet and make it more
    readable, you can remove the results for Elongation, Holes, Perimeter, and Spread
    by selecting cell K23, keeping the left mouse button depressed, and moving down
    to cell N26.
20. Right click and select Clear → Contents from the menu.
    NOTE: This is the same as <Delete> on your keyboard.
21. Save the job as MyBlobs.job on the In-Sight camera and your own folder on the PC.

Lab Exercise 5.2 – Snippets

At the end of this lab exercise, Participants will be able to:
• Utilize Snippets to quickly create tolerances and graphics

The Participant will utilize the following In-Sight Functions to successfully complete this exercise:
• CheckTolerance snippet to check for pass/fail
• Use a snippet to check for pass/fail and display color indicators

Follow the steps below to complete the lab exercise:

1. Continue with MyBlobs.job.
2. Click on the Snippets tab in the Palette on the right side of the In-Sight Explorer interface.
3. Insert a CheckTolerance.cxd Snippet (under Math & Logic) into cell L22.
   NOTE: This is one row higher than you may think, but it is to accommodate the two
   rows of headers in this snippet.
4. Double-click in cell L24 and have it relative reference the result of the first blob
   area in cell J24.
5. Set the Min and Max values to the -10% and +10% values you calculated earlier in
   this lab.
6. Copy and paste the single row of cells L24 – P24 into cells L25 and L26.
7. Tweak the Min and Max values to ±10% for the remaining blobs.
8. In cell O27, insert an And statement to determine if all blobs had passed (don't
   forget to add a comment in cell N27).
9. Save the job as MySnippet on the In-Sight camera and your own folder on the PC.
Lab Exercise 5.3 – Image Tools

At the end of this lab exercise, Participants will be able to:
• Use an Image Tool to improve an image for inspection by a vision tool

The Participant will utilize the following In-Sight Functions to successfully complete this exercise:
• Use the Erode filter operation to improve a degraded image of a Data Matrix code,
  and then read the filtered image using ReadIDMax

Follow the steps below to complete the lab exercise:

1. Connect to your Emulator and make sure you are emulating a standard resolution
   model – one whose model number ends in 00, for example, 5400. Be sure you
   have saved the job from the previous lab, and then start a new job.
2. Find a folder on your desktop named IS_Student or Student or something similar.
   (Ask the instructor if you need help.)
   Navigate through subfolders named Classes → In-Sight Spreadsheets
   Standard → Resources → Images (or something similar). Then drag an image file
   named DegradedDataMatrix into the spreadsheet pane.

   This is a very degraded code:

3. Skip 10 rows in the spreadsheet and enter the comment Read Data Matrix code
   in cell B11.
4. Skip a row and enter a ReadIDMax tool into cell C12. Click once on the Region
   parameter and then click on the Maximize Region button at the top of the Property
   Sheet. This will make the region the whole Field of View:
5. In the Property Sheet, set parameters as follows:

   Symbology Group: Data Matrix
   Advanced Decode Mode: Allow Non-conformant Modules
   Leave other parameters at their defaults.

   Click OK to close the Property Sheet. The tool should not be able to decode the
   degraded image:

6. Next, we are going to create a better image using the filter type called Erode.
   Then we will change ReadIDMax so that it references the filtered image.
7. Place a comment in cell C0: "Filter:" In cell D0, place a Filter tool. (In earlier
   versions of In-Sight Explorer, the tool we want is called NeighborFilter.) Choose
   Erode from the pull-down list of Filter Types.

   Click once on the Region parameter and then click on the Maximize Region
   button at the top of the Property Sheet. This will make the region the whole Field
   of View.

   Set the following parameters:

   Filter Type: Erode (which shrinks white areas)
   Kernel Rows: 3
   Kernel Columns: 3

   Leave other parameters at their default values and click on OK to exit the Property
   Sheet.

8. Change the ReadIDMax Property Sheet so that it points to the filtered image in cell
   D0. It should now be able to decode the data matrix:

9. The choice of kernel size (Kernel Rows and Kernel Columns) can affect whether a
   tool is successful. In the above example, we left the kernel size at the default
   value (3x3) and ReadIDMax was successful.

   Try a kernel size of 5x5. Is ReadIDMax successful? Try a kernel size of 15x15. Is
   ReadIDMax successful? Why or why not?

   NOTE: The region for a tool must be no larger than the region of the filter it uses.
   We accomplished this by maximizing the regions of both ReadIDMax and Filter to
   be the whole Field of View.
10. We do not use this job in subsequent labs, so there is no need to save it.

Exercise 5.4 – (if time allows)

Try other Filter Types such as Binarize, Clip, Stretch, Equalize, Edge Magnitude, Erode,
and Dilate.
In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The
questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard
section.

Section 5 – Blob & Image Tools


• Apply the Property Sheet parameters and auto-inserted information for the
ExtractBlobs tool to a sample image
• Apply a CompareImage function to create a filtered image of part defects
• Import and Export Snippets
• Explain when and how to use the SurfaceFX tool

1. What are the two modes for setting the threshold for ExtractBlobs?

2. Explain the difference between Number to Sort=0 and Number to Sort=5.

3. Name three image Filter Types and indicate what they do.

4. Briefly explain how the SurfaceFX tool works

Section 6: Cell State, Error Handling, and Calibration

Objectives

At the end of this section Participants will be able to:
- Explain the uses of Cell State
- Explain the uses of Error Handling
- Implement a non-linear Calibration using the Calibration Wizard
- Identify the two steps in Calibration
Section 6 | Slide 2

In the sixth section of the In-Sight Spreadsheets Standard training we will focus on Cell State, Error
Handling, and Calibration.

At the end of this section Participants will be able to:
- Explain the uses of Cell State
- Explain the uses of Error Handling
- Implement a non-linear Calibration using the Calibration Wizard
- Identify the two steps in Calibration


Cell State

Cell State is a way to enable or disable execution of one or more cells in the spreadsheet.

While disabled:
- Cell is not executed
- Cell's contents remain as they were before disabling
Section 6 | Slide 3 Section 6 | Slide 4

The Cell State dialog allows you to enable or disable the execution of cells either explicitly or Configuring the State of one or more cells:
conditionally, based upon the value of a reference cell.
1. Select a cell or range of cells in the spreadsheet.
Disabled cells will not be executed when the spreadsheet updates, so these cells retain their current
values indefinitely, unless they are re-enabled. 2. On the Edit Menu, click Cell State.
NOTE: You can also Right Click on the cell and select Cell State from the menu.



Cell State

Example 1:
Use the logical result of 1 or 0 (the If statement) from the FindPatterns referenced cell to enable or
disable the ExtractHistogram's cell state.
Section 6 | Slide 5 Section 6 | Slide 6

3. Select one of the following states for the selected cell(s):
   - Disabled – Selected cell(s) will not execute upon trigger.
   - Enabled – Selected cell(s) will execute upon trigger.
   - Conditionally Enabled – Selected cell(s) will be Enabled if the value of a reference cell is non-zero.
4. If you chose Conditionally Enabled in the previous step:
   - Select either Relative or Absolute for the type of cell reference.
   - Click Select Cell to enter cell selection mode.
     NOTE: If a cell is referenced to itself, that reference is ignored.
   - Select a cell whose value will determine the state of the selected cell(s) from Step 1. Whenever
     the Cell Reference evaluates to 0, the selected cell(s) will be Disabled; otherwise, they will be
     Enabled.
5. Click OK.

The ExtractHistogram should only update if a valid model is found.

NOTE: You should use the logical result of 1 or 0 (the If statement) from the FindPatterns referenced cell
to enable or to disable the ExtractHistogram's cell state.

1. Highlight cells to be enabled/disabled.
2. Right-click and select Cell State.
3. Choose Conditionally Enabled and click the Select Cell button.
4. Select the reference cell that will control cell states.
5. The Cell State dialog should reflect the selection. Click OK.
6. The ExtractHistogram cell should only update when a valid FindPatterns model is found.
   Notice it is grayed out when disabled.

NOTE: You will get a 'Circular Reference' error if you try to use the Logic 1/0 for the Histogram.
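Conceptually, a conditionally enabled cell behaves like the sketch below (a Python illustration, not In-Sight internals): it recomputes only when the reference value is non-zero, and otherwise retains its previous contents:

```python
class ConditionalCell:
    """Model of a Conditionally Enabled spreadsheet cell: the cell executes
    on trigger only while its reference cell is non-zero; while disabled it
    keeps its last value. Illustration only."""

    def __init__(self, compute):
        self.compute = compute  # the cell's function, e.g. ExtractHistogram
        self.value = None

    def update(self, reference_value, *args):
        if reference_value != 0:            # enabled: execute on trigger
            self.value = self.compute(*args)
        return self.value                   # disabled: retain prior contents
```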

Section 6 | Slide 5 Section 6 | Slide 6


Cell State

Example 2:
WriteImageFTP is a function that writes out the current image for every trigger of AcquireImage.

But suppose you want only to write failed images.

1. Create a cell with 1 for part fail, 0 for part pass.
2. Cell state WriteImageFTP to that cell.

Section 6 | Slide 7 Section 6 | Slide 8

WriteImageFTP writes the current image to an FTP server on the network. Optionally, an SVG file may
be created, which includes the overlay graphics on top of the image. This function is typically used to
automatically save images of failed inspections during runtime.

NOTE: The function requires the In-Sight vision system to be Online to send images and SVG files.
If an AcquireImage parameter is modified while the vision system is Online, the function will send the
current image.

When a cell is cell stated, the controlling cell is shown to the right of the formula.
When a cell is cell state disabled, its contents are shown in grey.


Viewing Dependencies

Viewing Dependencies: Arrow Properties

- Green arrow: cells are dependent on currently highlighted cell


- Blue arrow: highlighted cell depends on these cells
- Dashed arrow: Designates a cell state dependency
- Purple border: cell is not referenced elsewhere in spreadsheet

Section 6 | Slide 9 Section 6 | Slide 10

To view dependencies:
1. Choose the cells whose dependencies you would like to see
2. Activate dependency viewing options
3. View dependencies

This is an easy way to see what cells are interrelated.

The shortcut buttons on the Job Audit toolbar allow the user to:
- Increase levels of dependencies
- Decrease levels of dependencies
- Clear all visible dependency arrows
- Display cell dependencies with errors
- Jump to the first error (#ERR)

A dependency exists when the expression in a cell references another cell or range of cells.

There are two parts to each dependency:
- Precedent – the source of the data, i.e. the cell or range of cells that a cell's expression refers to.
- Dependent – the destination of that data, i.e. a cell that contains an expression that refers to other cells.

Dependencies are drawn as follows:
- Green – these cells are dependent on the currently highlighted cell
- Blue – the highlighted cell depends on these values
- Dashed – designates a cell state dependency


In-Sight Error Handling (#ERR)

FindPatterns → ExtractHistogram Structure

Section 6 | Slide 11 Section 6 | Slide 12

What does #ERR mean?

A cell that displays #ERR indicates that the cell could not execute its function correctly, generally
due to an invalid input parameter. An #ERR condition can occur for a variety of reasons, some due to
normal operation of the vision application, and some due to errors in the spreadsheet logic itself.

Two ways to obtain #ERR:
1. A function fails.
2. An input parameter is invalid.

A function that inputs #ERR also outputs #ERR; therefore errors are easily propagated throughout the
spreadsheet. Since the Histogram uses the fixturing from the FindPatterns tool, the #ERR from the
FindPatterns function has carried into the ExtractHistogram structure.


In-Sight Error Handling (#ERR)
In-Sight provides CountError and ErrFree functions to handle #ERR.

CountError: How It Works
CountError returns the number of #ERRs occurring in a specified range of cells.

Section 6 | Slide 13 Section 6 | Slide 14

Within the Mathematics tools, In-Sight provides the CountError and ErrFree functions to handle #ERR.

- CountError – Returns the number of errors in one or more cells or cell ranges.
- ErrFree – Converts #ERR in a cell to a numerical zero (0); when the cell is not in #ERR, it passes
  through whatever numerical value the cell holds. Because the #ERR is not passed to referencing
  cells, this suppresses error propagation throughout the spreadsheet.

How many errors did CountError return in the specified cells?

B7? __________________________________________________

C7? __________________________________________________

B7, C7, D7? ____________________________________________
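The behavior of the two functions can be modeled in a few lines of Python (ERR is a stand-in marker for #ERR; this is an illustration, not In-Sight code):

```python
ERR = object()  # stand-in for In-Sight's #ERR marker

def count_error(*cells):
    """CountError: number of #ERR values among the given cells."""
    return sum(1 for c in cells if c is ERR)

def err_free(cell):
    """ErrFree: convert #ERR to 0; pass any other value through unchanged,
    so the error does not propagate to referencing cells."""
    return 0 if cell is ERR else cell
```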



CountError: How to Use

Use CountError & conditional Cell State to enable or disable other tools:
A8 = Not(CountError(C7))
A10 = ExtractHistogram(…) if A8
Disabled ExtractHistogram | Enabled ExtractHistogram

ErrFree

Section 6 | Slide 15 Section 6 | Slide 16

The ExtractHistogram function in cell A10 is cell stated to cell A8.

In the example on the left, because CountError(C7)=1, A8=Not(CountError(C7))=0. This disables cell A10.
NOTE: Cell A10 appears in gray as a result of being cell state disabled.

In the example on the right, because CountError(C7)=0, A8=Not(CountError(C7))=1. This enables cell A10,
and A10 appears normal.

ErrFree replaces #ERR with a zero.
- ErrFree keeps proper logical values.
- It converts #ERR in a cell to a numerical zero (0); when the cell is not in #ERR, it passes through
  whatever numerical value the cell holds. Because the #ERR is not passed to referencing cells, this
  suppresses error propagation throughout the spreadsheet.



Calibration – What is it?
• All vision tools operate in the pixel world.
• What does it mean to be 108 pixels long?

Grid Calibration (non-linear calibration)
Uses a commercial-quality grid with known distances:
• Checkerboard
• Dots

Section 6 | Slide 17 Section 6 | Slide 18

What does it mean to be 108 pixels long?

This changes with each camera environment based on:
1. Working distance (part to camera)
2. Mounting of the camera (directly over or at an angle)
3. Optics used (lens)

To get real-world, meaningful coordinates, such as inches or millimeters, it is necessary to Calibrate. This
relates the real world to the pixel world.

Grid calibration uses a grid of squares or a grid of dots. These should be rigid, commercial-grade grids of
high accuracy, on which the distance between corners of squares or between dots is known to a certain
level of accuracy. You should NOT use a grid printed on a piece of paper, because it can bend and
because the ink may bleed into the paper, yielding poor accuracy.

Grid calibration allows the following:
- Reporting measurements in real-world units, such as mm
- Un-distorting the image to correct for:
  - Lens distortion
  - Perspective distortion

A grid may have a "fiducial," which is a predefined pattern that can be used to align with the coordinate
system of a robot.
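At its simplest, calibration establishes a scale between the two worlds. The sketch below uses a hypothetical measured grid spacing in pixels; real grid calibration also corrects lens and perspective distortion rather than applying a single linear scale:

```python
GRID_SPACING_MM = 10.0      # known checker spacing on the calibration plate
GRID_SPACING_PIXELS = 37.5  # spacing measured in the image (hypothetical value)

def pixels_to_mm(length_pixels):
    """Convert a pixel length to millimeters using the grid-derived scale.
    Linear scale only; illustration of the concept, not the CalibrateGrid math."""
    return length_pixels * GRID_SPACING_MM / GRID_SPACING_PIXELS
```

Under this scale, a feature 108 pixels long measures 28.8 mm.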



Lens Distortion

Undistorted Image | Barrel Distortion

Section 6 | Slide 19 Section 6 | Slide 20

Lenses have distortion effects on the image. These are some common types.

In Pincushion distortion the image magnification increases with the distance from the optical axis. The
visible effect is that lines that do not go through the center of the image are bowed inwards, towards the
center of the image, like a pincushion.

In Wave distortion the image has the appearance of movement. It is a combination of barrel distortion
and pincushion distortion. Straight lines appear curved inwards towards the center of the frame, then
curve outwards at the extreme corners.

In Barrel distortion the image magnification decreases with distance from the optical axis. The apparent
effect is that of an image which has been mapped around a sphere (or barrel).

This slide shows how the lens can take an undistorted image and make the center of the image appear
larger than the sides of the image. This is barrel distortion.



Lens Distortion
Application Example (barrel distortion):
Use grid calibration so the bottles in the red region will be located as accurately as the bottle in the
green region.

Perspective Distortion
(camera mounted at angle Θ)

Section 6 | Slide 21 Section 6 | Slide 22

Grid calibration is a form of non-linear calibration.

The benefits of non-linear calibration include:
- Accurate location of objects at the edges of the image
- Vision sensor mounting flexibility: allows you to maintain accuracy even when a perpendicular
  mount is physically impossible

Cameras that are not mounted perpendicular to the surface exhibit Perspective Distortion. The objects'
dimensions are skewed depending on the angle and location.



CalibrateGrid: Calibration Wizard
Checkerboard
Dots

CalibrateGrid Step 1 – Setup Parameters
Calibration Dialog Box

Section 6 | Slide 23 Section 6 | Slide 24

Accessing CalibrateGrid in the Tool Palette opens the Calibrate Wizard dialog box.

There are 3 steps to calibrate the vision system:
- Setup. When done configuring, click on the Pose link at the left to go to that step.
- Pose. When done configuring, click on the Calibrate button to do the actual calibration and see results.
- Results. To exit the Wizard, click the OK button.

The Setup parameters allow the user to specify the following:

Grid Type specifies the pattern that will be used to construct the calibration: Checkerboard, with fiducial;
Checkerboard, no fiducial; Dots, with fiducial; or Dots, no fiducial.

Grid Spacing specifies either the size of the square in a checkerboard pattern, or the distance from
center to center of the dots in a grid-of-dots pattern (.0000001 to 9999999; default = 10).

Grid Units specifies the real-world measurement units (Microns, Millimeters (default), Centimeters,
or Inches) that the calibration will be based upon.

Number of Poses specifies the number of poses that will be required to complete the calibration.
Multiple poses are required when a single calibration pattern cannot fill the field of view. In this type of
calibration, the calibration pattern is placed in various known physical locations to cover the field of view.
Each position of the calibration pattern is called a pose.

Lens Model specifies the type of distortion correction (Radial (default) or Projection) to use based on the
type of lens being used to acquire the image.
- Radial refers to distortion that affects any optical lens where the magnification is different at the
  edges of the field of view than at the center of the field of view.
- Projection refers to distortion introduced when the vision system's optical axis is not perpendicular
  to the scene being acquired.

When done with step 1, click on Pose in the list on the left.



CalibrateGrid Step 1 – Checkerboard with Fiducial

CalibrateGrid: Multi-Pose Benefits

For Large Field of View:
- Can use a smaller, less cumbersome calibration plate.
- Take multiple poses throughout the field of view to calibrate the entire image.

For Highly Distorted Images:
- Can pose multiple times to bring in a large number of calibration points, for a more accurate
  calibration.
Section 6 | Slide 25 Section 6 | Slide 26

The fiducial for a checkerboard grid is the pattern of two rectangles shown above. It defines the origin
(0,0) and the angle of the grid, which is useful for aligning to the coordinates of a robot.

The Number of Poses specifies the number of poses that will be required to complete the calibration.
Multiple poses are required when a single calibration pattern cannot fill the field of view. In this type of
calibration, the calibration pattern is placed in various known physical locations to cover the field of view.
Each position of the calibration pattern is called a pose.

CalibrateGrid can process from 1 to 30 poses.

What are the benefits of multi-pose calibration?

For Large Field of View:
- Use a smaller, less cumbersome calibration plate.
- Take multiple poses throughout the field of view to calibrate the entire image.

For Highly Distorted Images:
- Pose multiple times to bring in a large number of calibration points.

The benefit is a more accurate calibration.



CalibrateGrid Step 2 – Pose Parameters
The Grid features are located automatically.

CalibrateGrid: Results
Average & Maximum Pixel Error Displayed
Calibration Result
Scale

Section 6 | Slide 27 Section 6 | Slide 28

Place the grid under the camera and acquire an image.

The Origin Location (World Coordinates) specifies the X and Y location of the origin in real-world
coordinates; in the event of a discrepancy between the angular relationship of the grid’s axes and the
real-world axes, the Angle may be specified, as well.
Feature Points Found displays the total number of extracted feature points to be used in the calibration.
Acquire Image specifies how the image will be acquired.
- Manual specifies that an image will be acquired after manually pressing the Manual Trigger button.
- Live Mode specifies that the In-Sight vision system will enter live video mode, where the focus of
the vision system may be adjusted.
- From File specifies that an image will be loaded using the Open Image dialog.
Grid Axes specifies the grid axes of the calibration pattern when a calibration pattern without a fiducial is
selected.
Adjust Region launches interactive graphics mode to define a region of interest for the calibration; only
features within the region of interest will be used in the calibration.
Feature Points Table displays the feature points extracted from the image in their pixel row / column
coordinates, as well as their calibration pattern location in grid coordinates (X,Y), relative to the origin.
The X and Y coordinates are updated anytime the origin value changes. Extracted feature points are
graphically displayed by a green colored ‘x’ at each feature location. Selecting a feature point from the
table, or clicking on a feature point in the graphical display, will highlight the feature in both locations and
show its coordinates graphically.
Click the Calibrate button.

CalibrateGrid Results shows the following information:
Total Feature Points displays the total number of feature points that were extracted and used in
the calibration.
Average Error displays the average error, in pixels, during the calibration.
Maximum Error displays the maximum error, in pixels, during the calibration.
The Calibration Graphic displays a graphic representation of the calibration.
- Excellent = Error ≤ 0.25
- Good = 0.25 < Error ≤ 0.50
- Marginal = 0.50 < Error ≤ 2.0
- Poor = 2.0 < Error ≤ 5.0
- Very Poor = 5.0 < Error
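The rating thresholds above can be expressed as a small lookup. Here is a Python sketch for illustration only (the thresholds come from the slide; the function itself is not an In-Sight tool):

```python
def calibration_grade(error_px):
    """Map a CalibrateGrid error (in pixels) to the slide's quality rating."""
    if error_px <= 0.25:
        return "Excellent"
    if error_px <= 0.50:
        return "Good"
    if error_px <= 2.0:
        return "Marginal"
    if error_px <= 5.0:
        return "Poor"
    return "Very Poor"

# A calibration with an average error of 0.3 pixels would be rated Good.
assert calibration_grade(0.3) == "Good"
```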
Section 6 | Slide 27 Section 6 | Slide 28
CalibrateImage and TransformImage CalibrateImage

A0: AcquireImage
Grid under camera

Calibrating
CalibrateGrid
Contains conversion values for
going pixels  real-world units

Use CalibrateImage or TransformImage to create a new CalibrateImage


image Original image in
real-world units

Section 6 | Slide 29 Section 6 | Slide 30

CalibrateGrid is the starting point for all grid calibrations. It contains conversion factors necessary to go
from pixels to real-world units, based on the array of checkers or dots.
After CalibrateGrid, you follow with either CalibrateImage or TransformImage.
- CalibrateImage represents the original image (which still has any distortion), but in real-world units.
- TransformImage represents an image that has been corrected for lens distortion, but which is still in pixels.
In either case, you follow up with other vision tools.

This shows how to use CalibrateImage to convert measurements into real-world units.
This is the sequence used to calibrate the setup.
1. A calibration grid must be placed under the camera, using the same setup that is used to inspect parts.
2. CalibrateGrid references AcquireImage to come up with conversion factors.
3. CalibrateImage references both AcquireImage and CalibrateGrid to come up with the image in real-world units.
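Conceptually, the calibration structure holds the mapping from pixel coordinates to real-world coordinates. The toy Python sketch below illustrates the idea with an assumed scale-and-offset transform only; the real CalibrateGrid also models rotation and lens distortion:

```python
def make_calibration(mm_per_pixel, origin_px):
    """Return a pixel-to-world transform (scale + offset only)."""
    ox, oy = origin_px

    def to_world(col, row):
        # Shift to the calibration origin, then scale pixels to millimeters.
        return ((col - ox) * mm_per_pixel, (row - oy) * mm_per_pixel)

    return to_world

# Assumed example values: 0.1 mm per pixel, origin at image center (320, 240).
to_world = make_calibration(0.1, (320, 240))
x_mm, y_mm = to_world(420, 240)          # a point 100 pixels right of the origin
assert abs(x_mm - 10.0) < 1e-9 and abs(y_mm) < 1e-9
```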

Section 6 | Slide 29 Section 6 | Slide 30


CalibrateImage CalibrateImage
A0: AcquireImage
Part under camera

Inspecting CalibrateGrid
Contains conversion values for
going pixels  real-world units

CalibrateImage
Original image in
real-world units

Measurement tool (Example: FindSegment)


Tool references CalibrateImage
Results are in real-world units based on original
image
Section 6 | Slide 31 Section 6 | Slide 32

Once the sequence on the last slide is completed, the above sequence is used to create real-world
measurements on a part. The measurement tools reference CalibrateImage, yielding results in real-world units.

1. Reference the image to be used (default – A0) for Image.
2. For Calib, reference the calibration structure created, such as the previous CalibrateGrid function.
3. Open the FindSegment tool and reference the CalibrateImage cell for “Image.”
Now the vision tool (FindSegment) will output the distance in real-world values as defined by the calibration.

Section 6 | Slide 31 Section 6 | Slide 32


CalibrateImage TransformImage: Calibrating

A0: AcquireImage
Grid under camera
CalibrateGrid

CalibrateImage
CalibrateGrid
Contains factors to convert
Measurement in pixels pixels to real-world units
(tool references AcquireImage)

Real world measurement (mm)


(tool references TransformImage
CalibrateImage)
ExtractCalibration
Undistorted image in
(auto inserted)
pixels

Section 6 | Slide 33 Section 6 | Slide 34

CalibrateImage associates a calibration data structure with an image data structure, to create a new
image data structure. The resulting data structure can be referenced by other vision tool functions, in
order to display their results in the world coordinates defined by the referenced calibration (e.g., real-world
measurement in mm).

NOTE: The output of any vision tool that reports in real-world coordinates cannot be used as a fixture or
region input to another vision tool.

This slide and the next show how to use TransformImage to convert measurements into real-world units.

In the Calibration stage, a calibration grid must be placed under the camera, using the same setup that is
used to inspect parts.
CalibrateGrid references AcquireImage to come up with conversion factors.
TransformImage references both AcquireImage and CalibrateGrid to come up with an undistorted image in pixels.
TransformImage auto-inserts an ExtractCalibration function.

Continued on next slide.

Section 6 | Slide 33 Section 6 | Slide 34


TransformImage: Inspecting Part TransformImage

A0: AcquireImage
Part under camera

CalibrateGrid

TransformImage ExtractCalibration
undistorted image in pixels

Measurement tool (Example: FindSegment)


Tool references TransformImage
Results are in pixels based on undistorted image

TransEdgesToWorld plus appropriate math


Results are in real-world units based on undistorted image
Section 6 | Slide 35 Section 6 | Slide 36

In the Inspection stage, the measurement tool(s) reference TransformImage, yielding results in pixels
based on the undistorted image. A measurement tool, such as FindSegment, references TransformImage
to get a measurement in pixels, based on the undistorted image.

Finally, a TransEdgesToWorld function (or whatever transform is appropriate) references both
FindSegment and ExtractCalibration to yield measurements in real-world units.

TransformImage generates a new image, using the information in the CalibrateGrid structure, to cut
down on lens distortion. This image is still in pixels.

Section 6 | Slide 35 Section 6 | Slide 36


TransformImage TransformImage

Original image of part:

Undistorted image after


TransformImage:

Section 6 | Slide 37 Section 6 | Slide 38

This spreadsheet shows the sequence of tools going from the original image to a measurement in
millimeters based on an undistorted image.

The top image shows the original image represented by AcquireImage in cell A0.
The bottom image shows the image as corrected by TransformImage, using grid calibration.

Section 6 | Slide 37 Section 6 | Slide 38


Calibration Guidelines How accurate is the vision system?

You are calibrating the setup


? What part are you inspecting?
? How accurately is the calibration grid printed/manufactured?
? How good is your lens?
? Is the part or camera moving?
? What is the image quality?
? What is the resolution of the camera?
? How accurate are the vision tools?

• Overall Worst case is the sum of these errors.

• Keep calibration set-up identical to the production set-up
• Calibrate periodically

**To determine the accuracy of your vision system, you need to Test It!!!**
Section 6 | Slide 39 Section 6 | Slide 40

Keep the calibration set-up identical to the production set-up:
- Keep calibration object and part in the same plane
- Limit calibration to region of image features of interest

Calibrate periodically – whenever you think that the setup may change (each shift, daily, etc.).

Some of the factors affecting accuracy:

• What part are you inspecting?
  Smooth edges are optimal; burrs/rough edges will reduce accuracy
• How accurately is the calibration grid printed/manufactured?
  A grid printed on a standard ink jet or laser jet printer will limit accuracy
• How good is your lens?
  High quality lenses with telecentric properties will provide best results
• Is the part or camera moving?
  Vibrations can cause camera movement
  Part movement can blur the image
• What is the image quality?
  High gain increases pixel jitter; pixel saturation reduces accuracy
• What is the resolution of the camera?
  How many pixels?
• How accurate are the vision tools?
  Edge tools are accurate to .25 (1/4) pixel, PatFind to .1 (1/10) pixel, PatMax to .025 (1/40) pixel
• Overall worst case is the sum of these errors.
  These tend to be additive, and seldom cancel each other out.
  1/10th pixel accuracy is attainable if the parts are flat and have well-defined edges
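As a rough worked example of the "sum of errors" point, with assumed numbers (the field of view, jitter, and grid tolerance below are illustrative, not from the course; only the 1/4-pixel edge-tool figure comes from the slide):

```python
# Assumed setup: a 640-pixel-wide sensor viewing a 100 mm field of view.
fov_mm = 100.0
pixels = 640
mm_per_pixel = fov_mm / pixels          # 0.15625 mm/pixel

edge_tool_err_px = 0.25                 # edge tools: ~1/4 pixel (slide figure)
jitter_err_px = 0.1                     # assumed jitter from gain/vibration
grid_print_err_mm = 0.05                # assumed grid printing tolerance

# Worst case: the individual errors tend to be additive.
worst_case_mm = (edge_tool_err_px + jitter_err_px) * mm_per_pixel + grid_print_err_mm
print(f"worst case: {worst_case_mm:.3f} mm")   # → worst case: 0.105 mm
```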

**To determine the accuracy of your vision system, you need to Test It!!**

Section 6 | Slide 39 Section 6 | Slide 40


Summary Lab Exercise

• Cell State allows users to enable/disable cells on the fly.

• CountError and ErrFree allow an application to detect #ERRs and


deal with them in a logical way.

• The In-Sight system works in the pixel world. To get meaningful,


real-world units reported, you must use calibration.
- CalibrateGrid and CalibrateImage convert pixel measurements
to real-world units
- CalibrateGrid and TransformImage convert to real-world and
also correct for barrel, radial, and perspective distortion
- Calibration should be done when there is any change in the setup

• Many factors affect accuracy. The best way to determine accuracy


of a system is to measure parts having known accuracy.
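The ErrFree and CountError behavior summarized above can be modeled with a conceptual stand-in (plain Python for illustration; these are not the spreadsheet functions themselves, and the exact semantics are in the In-Sight documentation):

```python
ERR = object()   # stand-in for the spreadsheet's #ERR value

def err_free(value, default=0):
    """Pass a value through, substituting a safe default on #ERR."""
    return default if value is ERR else value

def count_error(cells):
    """Stand-in for CountError: how many referenced cells hold #ERR."""
    return sum(1 for c in cells if c is ERR)

assert err_free(42) == 42
assert err_free(ERR) == 0          # an Output function now sees 0, not #ERR
assert count_error([1, ERR, 3, ERR]) == 2
```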

Section 6 | Slide 41 Section 6 | Slide 42

In this section we covered the following topics:

- Cell State allows users to enable/disable cells on the fly
- CountError and ErrFree allow an application to detect #ERRs and deal with them in a logical way
- The In-Sight system works in the pixel world. To get meaningful, real-world units reported, you
  must use calibration
  - CalibrateGrid and CalibrateImage convert pixel measurements to real-world units
  - CalibrateGrid and TransformImage convert to real-world and also correct for barrel,
    radial, and perspective distortion
  - Calibration should be done when there is any change in the setup
- Many factors affect accuracy. The best way to determine accuracy of a system is to measure
  parts having known accuracy.

Complete:
Lab Exercise 6.1 – Error Handling
Lab Exercise 6.2 – Cell State
Lab Exercise 6.3 – Calibration

Section 6 | Slide 41 Section 6 | Slide 42


In-Sight Spreadsheets Standard Section 6 | Lab Exercise In-Sight Spreadsheets Standard Section 6 | Lab Exercise

Lab Exercise 6.1 – Error Handling

At the end of this lab exercise, Participants will be able to:
• Utilize the ErrFree tool to ensure that the final tool result is free of all errors

The Participant will utilize the following In-Sight Functions to successfully complete this
exercise:
• ErrFree

Follow the steps below to complete the lab exercise:

1. Continue with MySnippet.job.
NOTE: For the Logic statements that we created throughout the spreadsheet job,
we need to ErrFree each one so that the result never goes to #ERR as Output
functions will not know what to do with that result.
2. Enter the comment Error Control in cell L15.
3. Insert an ErrFree function into cell L16 (Histogram result) that references the Logic
result in cell K16.
4. Repeat step 3 for the Edge tool result.

Block Width

5. To remove the #ERRs in the Blob tool, you will ErrFree the Area results.
6. Enter the comment Error Control in cell K23. You may need to make the column
a little wider.
7. Insert an ErrFree function into cell K24 that references cell J24 (the result of the
Area from the first blob).
8. Insert an ErrFree function into cell L24 that references cell K24 (the new ErrFree
value).
9. Copy and paste cells K24 and L24 into cells K25 and L25.
10. Test with the good and bad blocks to ensure that no #ERRs are propagating
through the final tool result.
11. Save the job as MyErrorHandling.job on the In-Sight camera and your own folder
on the PC.

Page 1 Page 2
In-Sight Spreadsheets Standard Section 6 | Lab Exercise In-Sight Spreadsheets Standard Section 6 | Lab Exercise

Lab Exercise 6.2 – Cell State

At the end of this lab exercise, Participants will be able to:
• Integrate error handling and proper use of cell state

The Participant will utilize the following In-Sight Functions to successfully complete this
exercise:
• Cell State

Follow the steps below to complete the lab exercise:

1. Continue with MyErrorHandling.job.

2. Create a Pass / Fail logic statement that uses 1 to Pass and 0 to Fail for the
FindPatterns result in cell K12.
NOTE: You will not need to add the ErrFree statement step as the score never goes
to #ERR.

3. Use this result to control the cell state of your ExtractHistogram. Select the Hist
tool and right click. Select Cell State.

The Cell State dialog displays.

4. Select Conditionally Enabled and then click the Select Cell button.
5. Select cell K12, the logic statement that you just created, and click <Enter>.
The absolute reference to cell K12 will display in the Cell Reference field.

6. Click the OK button.

Notice the cell when the block is found and when it is not found.

Part Found

Part Not Found

Page 3 Page 4
In-Sight Spreadsheets Standard Section 6 | Lab Exercise In-Sight Spreadsheets Standard Section 6 | Lab Exercise

7. Repeat the same process to control the cell state for the FindSegment and
DetectBlobs tools.
8. Save the job as MyCellState.job on the In-Sight camera and your own folder on
the PC. (You will not use MyCellState.job again until section 6.4.)

Lab Exercise 6.3 – Dependencies Viewer

At the end of this lab exercise, Participants will be able to:
• Explain how to view multiple levels of dependencies within the spreadsheet

The Participant will utilize the following In-Sight Functions to successfully complete this
exercise:
• Dependencies

Follow the steps below to complete the lab exercise:

1. Continue with MySnippet.job.

2. Highlight the FindPatterns structure.

3. Click the Show Dependency Levels Increase button.
Or: Click View  Job Auditing  Increase Dependency Levels.

Page 5 Page 6
In-Sight Spreadsheets Standard Section 6 | Lab Exercise In-Sight Spreadsheets Standard Section 6 | Lab Exercise

4. Notice the graphics showing which cells depend on the FindPatterns structure
(those in green) and which cells the FindPatterns structure depends upon (those in blue).

5. Click the Show Dependency Levels Increase button again.

A second level of dependencies displays.

Block Width

6. Click the Show Dependency Levels Reset button to remove the dependency
arrows.
7. Click the Save Job button to save your work.
NOTE: This will save your work under the last save job name, MySnippet.job.
Page 7 Page 8
In-Sight Spreadsheets Standard Section 6 | Lab Exercise In-Sight Spreadsheets Standard Section 6 | Lab Exercise

Lab Exercise 6.4 – Calibration

At the end of this lab exercise, Participants will be able to:
• Utilize the CalibrateImage to transform the pixel locations of the Image A0 to
calculate the real world positions
• Use the image reference in the Edges (FindSegment) structure to convert the width
of the block from the previously found distance to millimeters

The Participant will utilize the following In-Sight Functions to successfully complete this
exercise:
• CalibrateGrid
• CalibrateImage

Follow the steps below to complete the lab exercise:

1. Continue with MyCellState.job.

2. Your instructor will provide a calibration grid. Position the calibration grid under the
camera at the same distance to the lens as the block.
3. Insert a CalibrateGrid function into cell C35.

The CalibrateGrid Wizard displays.

4. Configure the Setup step. The instructor will tell you the Grid Spacing, usually 5 mm.
5. Select Pose – acquire an image of the calibration plate using Live Video. Once
you are happy with the image, click anywhere in the image to stop Live Video, then
click the Calibrate button.
6. Select Result – the quality of the calibration is returned. Once complete, click the
OK button.

7. Remove the calibration grid and return the block under the camera.

Page 9 Page 10
In-Sight Spreadsheets Standard Section 6 | Lab Exercise In-Sight Spreadsheets Standard Section 6 | Lab Exercise

8. Insert a CalibrateImage function into cell C36 to create an image based on real
world units.

The CalibrateImage Property Sheet displays.

9. The CalibrateImage must reference the original image cell A0 and the
CalibrateGrid cell C35.
10. In the original FindSegment (cell C20) – change the reference to the
CalibrateImage cell C36 instead of the original image cell of A0 to determine the
gap width in millimeters and click the OK button.

11. The value is returned in millimeters; does this make sense in terms of your
calibration?

12. Adjust your logic statement for the gap in cell K20 to account for the new results.

13. Save the job as MyCal.job on the In-Sight camera and your own folder on the PC.

Page 11 Page 12
In-Sight Spreadsheets Standard Skills Journal In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The
questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard
section.

Section 6 – Cell State, Error Handling & Calibration

• Explain the uses of Cell State
• Explain the uses of Error Handling
• Implement a non-linear Calibration using Calibration Wizard
• Identify the three steps in Grid Calibration

1. Suppose you want to communicate the Average result from ExtractHistogram but
only when a part fails. How would you do this in the spreadsheet?

2. What are the two types of situations that result in an #ERR?

3. Name the two functions that can handle cells with #ERR in them.

4. What are the three steps in Grid Calibration?

5. List three factors that can affect the accuracy of a vision system.

Page 1 Page 2
Discrete I/O
Section 7

Objectives

At the end of this section Participants will be able to:
- Identify which functions to use to read from and write to
discrete channels, and the choices for I/O settings
- Implement the ReadDiscrete and WriteDiscrete functions
correctly in a job, including the proper I/O settings
- List four conditions that can affect whether In-Sight is Online or
Offline

Section 7 | Slide 2

In the seventh section of the In-Sight Spreadsheets Standard training we will cover Discrete and Serial I/O.

At the end of this section Participants will be able to:
- Identify which functions to use to read from and write to discrete channels, and the choices for
I/O settings
- Implement the ReadDiscrete and WriteDiscrete functions correctly in a job, including proper I/O
settings
- List four conditions that can affect whether In-Sight is Online or Offline

Section 7 | Slide 1 Section 7 | Slide 2


In-Sight I/O Expansion Modules The Event Function

Discrete I/O

Series | Discrete Inputs                  | Discrete Outputs
       | without expansion | with CIO-MICRO | without expansion | with CIO-MICRO
Micro  | 0                 | 8              | 2                 | 10
2000   | 1                 | N/A            | 4                 | N/A
5000   | 0                 | 7              | 2                 | 8
7000   | 3                 | 11             | 4                 | 12-14

All cameras also have a dedicated trigger input (Trigger+ and Trigger-)

Example:
When a pulse occurs on discrete input line 0, the Event can be used to
update those portions of the spreadsheet that depend on the Event
instead of using the image acquisition cell (A0).

Section 7 | Slide 3 Section 7 | Slide 4

This chart outlines the different models of I/O Expansion Modules and the benefits of each.

What I/O module goes with what camera?
- CIO Micro (CC)
  - In-Sight Micro
  - In-Sight 7000 Series
  - In-Sight 5600 Models
- CIO 1400
  - In-Sight 5000 Series
- The In-Sight 2000 series does not have an I/O Expansion Module

When an Event function is triggered, any cells that depend on the Event will be executed. Unlike
AcquireImage, no new image is acquired. This function can also be used as a source of soft triggers for
the AcquireImage function and holds a value of one when activated.

Event triggers include:

Conditions
- Discrete Input
- Manual Acquisition Trigger
- Online/Offline state of system
- Job Load Done
- Soft 0-7: Programmable in software
- Tune Button (In-Sight 7000 Gen II)

Error Conditions
- Acquisition Error (missed Camera Trigger)
- Discrete I/O Error (tracking pulse overrun)
- Serial Port Error
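The Event behavior described above — cells that depend on the Event re-execute when it fires, with no new acquisition — can be modeled with a toy dispatcher (plain Python for illustration only; this is not the In-Sight API):

```python
class Event:
    """Toy model of an Event cell: holds a value of 1 when activated and
    updates its dependent cells without acquiring a new image."""
    def __init__(self):
        self.value = 0
        self._dependents = []

    def depends(self, cell_update):
        self._dependents.append(cell_update)

    def fire(self):
        self.value = 1
        for cell_update in self._dependents:
            cell_update()

ev = Event()
updated = []
ev.depends(lambda: updated.append("histogram cell"))
ev.fire()
assert ev.value == 1 and updated == ["histogram cell"]
```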

Section 7 | Slide 3 Section 7 | Slide 4


Discrete I/O (prior to release 5.4) Discrete I/O (release 5.4 and later)

Similar dialog for outputs

Section 7 | Slide 5 Section 7 | Slide 6

The Discrete Inputs Settings dialog configures the discrete input lines on the active In-Sight vision
system. Discrete inputs are read into the In-Sight spreadsheet using the ReadDiscrete function.

Discrete Inputs are configured under Sensor  Discrete I/O Settings  Input Settings.

NOTE: The Discrete Input Settings dialog is not supported with the In-Sight 8405 vision system.
Each line can be configured for one of the following functions:

- User Data – General purpose input line; used to turn Location and Inspection Tools On or Off.
- Reset Counters - Resets the EasyBuilder counters (Job.Fail_Count, Job.Inspection_Count.job,
Job.Pass_Count, <Tool>.Error_Count, <Tool>.Fail_Count, and <Tool>.Pass_Count) to 0.
- Event Trigger – Triggers an event, through logic created in the Spreadsheet View.
- Job ID Number – Provides one bit of a Job ID Number, which is loaded when the State of a
different input line with a Type of Job Load Switch is ON.
- Online/Offline – Forces the vision system Offline or Online (LOW (0) = Offline, and HIGH (1) =
Online).
- Acquisition Trigger – Triggers the vision system to acquire an image.
- Job Load Switch – ON reads all of the Job ID Number lines and loads the specified job.

Section 7 | Slide 5 Section 7 | Slide 6


Discrete Inputs Discrete Inputs: Opening a Job

Some lines can be configured as either an Input or Output

• Job ID line(s) represent bits in a number.
  Examples
  000 = 0
  011 = 3
  101 = 5

• When the Job Load Switch is activated, the job in the camera starting
  with the number is opened

• Examples
  0MyJob
  3NortheastLine
  5GearInspect
Section 7 | Slide 7 Section 7 | Slide 8

1. Set exactly one Input Line’s Type to Job Load Switch.
2. Set at least one Input Line’s Signal Type to Job ID Number.
3. The Job ID bit is a binary coded number. Lowest line number is least significant bit (LSB). Job ID lines
must be next to each other.
Example
- 000 = 0
- 011 = 3
- 101 = 5
4. Select an Input Line to configure and set the Signal Type to Job Load Switch. The job file that is
loaded is indicated by the state (0 or 1) of any other Input Lines set to Job ID Number at the time of the
Load Switch signal.
NOTE: Must have a pin assigned as a Job Load Switch or the camera would constantly load jobs if the
Job ID pins were high.
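The binary Job ID encoding above can be checked with a quick sketch (Python for illustration only; line_states is ordered lowest line number first, i.e., LSB first):

```python
def job_id(line_states):
    """Decode the Job ID from consecutive Job ID input lines (LSB first)."""
    return sum(bit << i for i, bit in enumerate(line_states))

# The slide writes the bit patterns MSB-first (000, 011, 101), so the
# LSB-first lists below are those patterns reversed.
assert job_id([0, 0, 0]) == 0    # loads 0MyJob
assert job_id([1, 1, 0]) == 3    # 011 → loads 3NortheastLine
assert job_id([1, 0, 1]) == 5    # 101 → loads 5GearInspect
```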

Section 7 | Slide 7 Section 7 | Slide 8


ReadDiscrete: Reads a Discrete Line ReadDiscrete Example

ReadDiscrete
• Signal Type = User Data
• Use ReadDiscrete in Spreadsheet

Example:
Each time an image is acquired, read state of input line 1

Low = 0 Volts High = 24 Volts

Section 7 | Slide 9 Section 7 | Slide 10

To Configure a Discrete Input Line:

1. On the Sensor Menu, click Discrete I/O Settings.
2. Click Input Settings.
3. Click the I/O Module Configuration dialog and configure the I/O module. Once configured, click the OK
button to close the I/O Module Configuration dialog and return to the Discrete Input Settings dialog.
4. Select a Line to configure. The default settings vary depending on the In-Sight vision system being
configured and the type of I/O module connected to the system.
5. Optionally, enter an alternate Name for the input line by selecting the field containing the default
name, and entering a new name.
6. Select a Type for the selected input line.
a. Configure Line 1 as User Data type.
7. Select a Signal type for the selected input Line, which controls the sensitivity of the input line to edge
transitions:
a. Rising Edge – Changes the State of the input line on the leading edge of a pulse.
b. Falling Edge – Changes the State of the input line on the falling edge of a pulse.
c. Both Edges – Changes the State of the input line on the leading edge and falling edge of a
pulse. This option is only available when Event Trigger is the selected type.
8. Click OK to accept the changes (changes are saved to flash memory), or click Cancel to undo the
changes.

Each time an image is acquired, the ReadDiscrete tool and its accompanying MultiStatus (automatically
inserted) will change as follows:
- If line 1 is Low, the ReadDiscrete will show as 0 and the MultiStatus will show a yellow light.
- If line 1 is High, the ReadDiscrete will show as 1 and the MultiStatus will show a red light.

The colors can be changed to something other than yellow and red, if desired.
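The ReadDiscrete behavior just described can be simulated in a few lines (plain Python, not the In-Sight API; the MultiStatus colors are the defaults named on the slide):

```python
def read_discrete(line_states, line):
    """Return 1 if the given input line is High, else 0."""
    return 1 if line_states[line] else 0

MULTISTATUS_COLOR = {0: "yellow", 1: "red"}   # default colors per the slide

lines = {0: False, 1: True}    # assumed line states at acquisition time
value = read_discrete(lines, 1)
assert value == 1
assert MULTISTATUS_COLOR[value] == "red"
```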

Section 7 | Slide 9 Section 7 | Slide 10


Discrete Outputs Details Dialog

Depending on the Type chosen the details dialog offers


additional settings

Programmed Strobe

Some lines can be configured


as either an Input or Output

Section 7 | Slide 11 Section 7 | Slide 12

The Discrete Outputs Settings dialog configures the discrete output lines on the active In-Sight vision system.
Discrete outputs are sent out from the In-Sight spreadsheet using the WriteDiscrete function. Each line can be
configured for one of the following functions:

- Programmed – Enables a WriteDiscrete function in the spreadsheet to control the State of this output line.
Either pulsed or steady-state.
- High – Forces the output to HIGH (1).
- Low – Forces the output to LOW (0).
- Acquisition Start – Signals that the vision system has initiated an acquisition. Always pulsed.
- Acquisition End – Signals the completion of vision system acquisition. Always pulsed.
- Job Completed – Signals each time the spreadsheet has completed an update. Always pulsed.
- System Busy – HIGH when the vision system is running a job or responding to user input, LOW when the
vision system is idle.
- Job Load OK – Signals the successful loading of a job. Always pulsed.
- Job Load Fail – Signals the failure of a job load. Always pulsed.
- ERR: Missed Acquisition – Signals that an acquisition trigger was received before an Acquisition End signal
was sent, or that no image buffer was available for image acquisition when an acquisition trigger was received.
Always pulsed.
- ERR: Tracking Overrun – Signals that EasyBuilder issued a delayed discrete output signal sometime after the
time it was expected. Always pulsed.
- ERR: Tracking Queue Full – Signals that EasyBuilder issued a delayed discrete output for a line where a
different output had been previously scheduled to occur at the same time. Always pulsed.
- Online/Offline – HIGH (1) when the vision system is Online, LOW (0) when the vision system is Offline.
- Lifeline (CIO-Micro and CIO-Micro-CC Only) – HIGH when the In-Sight Micro vision system is actively
connected to the CIO-Micro or CIO-Micro-CC. LOW when the connection with the vision system fails.
- Waveform – Enables a Waveform function, WriteWaveformPulseTrain or WriteWaveformClocked, in the
spreadsheet to control the State of this output line.

The Details dialog will offer additional settings dependent upon the Type selected.

- Pulse – When this checkbox is selected, the output will be pulsed for the duration of the Pulse
Length. Clear this checkbox for steady-state output. Output must be pulsed when the
Acquisition Delay is greater than 0.
- Pulse Length (ms) – Duration of an output pulse; In-Sight Micro 1000 series, In-Sight 5000,
and In-Sight 8405 vision systems (10 to 1000 ms; default = 10), and In-Sight 7000 series and
In-Sight Micro 1402, 1412 and 1500 vision systems (1 to 1000 ms; default = 10).
- Acquisition Delay (N) – The number of acquisition or tracking pulses (0 to 1000) to delay the
output after a signal pulse is received by an output Line. If Acquisition Delay = 0, then the In-
Sight sensor updates the output line immediately on evaluating the WriteDiscrete function. If
Acquisition Delay is greater than 0, the output Line is always pulsed.
- Time After Trigger (ms) – When this checkbox is selected, the output will be fired after the
specified amount of time (0 to 10,000 ms).

NOTE: No output details are configurable for High, Low, System Busy, Online/Offline, Lifeline and
IO Module Standby Types.

- Strobe Start Position – Specifies when the strobe should pulse.
  - Acquisition Start – Specifies that the strobe will pulse as the In-Sight vision system
    begins its acquisition. Supported on all vision system models except the In-Sight 8405
    vision system.
  - Camera Trigger – Specifies that the strobe will pulse upon receiving a camera trigger
    event. Supported on all vision systems except the In-Sight 8405 vision system.
  - All Rows Exposed – Specifies that the strobe will pulse only when all pixel rows are
    exposed. Supported on the In-Sight 8405 vision system only.
Section 7 | Slide 11 Section 7 | Slide 12
Tri-Color LED WriteDiscrete: Send to a Discrete Line

• Signal Type = Programmed


• Use WriteDiscrete in Spreadsheet
Sensor Menu:

Example:
Each time an image is acquired, output pass (1) or fail (0) to line 0

Section 7 | Slide 13 Section 7 | Slide 14

- Tri-color LED: specifies pass/fail colors on cameras with Tri-color LED, e.g., In-Sight 7802.
When Signal Type for discrete output line 12 is set to Job Pass/Fail Cell, then whatever cell is
specified as watch cell under the Sensor Menu determines the color of the Tri-Color LED.

Section 7 | Slide 13 Section 7 | Slide 14


WriteDiscrete Example Online vs. Offline

WriteDiscrete

Online means that all In-Sight Input Offline means that most In-Sight Input
and Output signals are enabled. and Output signals are disabled.

NOTE: Value and color indicator will not update until system is online and triggered

Section 7 | Slide 15 Section 7 | Slide 16

A Programmed Output enables a WriteDiscrete function in the spreadsheet to control the State of this
output line. It will be either pulsed or steady-state.

Event – Specifies the event on which to read the specified value.
This parameter must be a reference to one of the following:
- The image data structure in cell A0
- A cell containing an Event function
- A cell containing a Button function

Start Bit – Specifies the first bit of the set to be written.
- In-Sight 5000 series and Micro 1000 series: (0 to 11; default = 0)
- In-Sight 7000 series: (0 to 13; default = 1)
NOTE: The Start Bit is on the left of the ‘LEDs’ and also capable of ‘bit shifting’ to use a 1 to
activate pin 4 instead of pin 1, for example.

Number of Bits – Specifies the number of bits in the set to be written.
- In-Sight 5000 series and Micro series: (0 to 12; default = 1)
- In-Sight 7000 series: (1 to 14; default = 1)

Value – Specifies the positive integer value to be written.
- In-Sight 5000 series and Micro series: (0 to 4,095; default = 0)
- In-Sight 7000 series: (0 to 16,383; default = 0)

Online means that all In-Sight Input and Output signals (discrete, serial, network, and non-manual
triggers) are enabled.
When Online:
You can do this:
- Acquisition triggers
- Serial I/O
- Discrete I/O
- Network I/O
But not this:
- Edit spreadsheet
- Open Property Sheets

Offline means that most In-Sight Input and Output signals are disabled.
When Offline:
You can do this:
- Edit spreadsheet
- Open Property Sheets
But not this:
- Acquisition triggers
- Serial I/O
- Discrete I/O
- Network I/O
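The interaction of Start Bit, Number of Bits, and Value can be sketched as bit-packing (Python for illustration only; the real WriteDiscrete drives physical output lines):

```python
def write_discrete(start_bit, number_of_bits, value, total_lines=14):
    """Place `value` onto `number_of_bits` output lines beginning at `start_bit`."""
    if value >= (1 << number_of_bits):
        raise ValueError("Value does not fit in the given number of bits")
    lines = [0] * total_lines
    for i in range(number_of_bits):
        # Least significant bit of the value goes on the start bit's line.
        lines[start_bit + i] = (value >> i) & 1
    return lines

# Write 5 (binary 101) on three lines starting at line 2: lines 2 and 4 go High.
state = write_discrete(2, 3, 5)
assert state[2] == 1 and state[3] == 0 and state[4] == 1
```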

Section 7 | Slide 15 Section 7 | Slide 16


Online / Offline: Some Ways to Switch Reminder about I/O

1.
2.
3.
4. Native Mode Commands (more later)

If one way says Offline, it overrides another saying Online.

• In-Sight must be Online in order to send or receive any I/O

• All settings (serial, discrete, etc.) are system-wide for In-Sight,
i.e., for all subsequent jobs loaded into In-Sight.
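The override rule — any source saying Offline wins — amounts to a logical AND across the control sources (Python sketch; the source names below are assumed labels, not In-Sight identifiers):

```python
def effective_online(sources):
    """True only if every control source reports Online."""
    return all(sources.values())

assert effective_online({"startup": True, "toolbar": True}) is True
# A discrete input forcing Offline overrides the toolbar saying Online.
assert effective_online({"toolbar": True, "discrete_input": False}) is False
```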


Section 7 | Slide 17 Section 7 | Slide 18

There are four ways to set the Online and Offline state:

1. Sensor → Startup
2. Toolbar
3. Discrete input line configured for offline
4. Native Mode Commands
   - ASCII commands sent to an In-Sight from another device (computer, PLC, etc.) over the network
   - Uses telnet protocol, port 23
   - SO0 = Set Offline
   - SO1 = Set Online

In-Sight must be Online in order to send or receive any I/O.

All settings (serial, discrete, etc.) are system-wide for In-Sight, i.e., for all subsequent jobs loaded into In-Sight.

Keep in mind what the function needs as output values:
- WriteDiscrete – Integer (0 or 1)
- WriteSerial – String
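The notes above describe the Native Mode SO0/SO1 commands sent over telnet (port 23). As a rough sketch of what a PC-side client might look like, here is a minimal Python version. The command framing and the success status ("1") are from the course material; the login-prompt handling, credentials, and buffer sizes are assumptions, not part of the documented protocol:

```python
import socket

def frame_command(cmd: str) -> bytes:
    """Frame a Native Mode command such as 'SO1' with the default
    CRLF terminator for transmission."""
    return (cmd + "\r\n").encode("ascii")

def parse_status(reply: str) -> bool:
    """A leading '1' on the status line indicates success."""
    return reply.strip().startswith("1")

def set_online(host: str, user: str = "admin", password: str = "") -> bool:
    """Log in over telnet (port 23) and send SO1 (Set Online).
    Hypothetical helper; exact login prompts vary by firmware."""
    with socket.create_connection((host, 23), timeout=5) as s:
        s.recv(1024)                      # "User:" prompt (assumed)
        s.sendall(frame_command(user))
        s.recv(1024)                      # "Password:" prompt (assumed)
        s.sendall(frame_command(password))
        s.recv(1024)                      # login banner
        s.sendall(frame_command("SO1"))   # Set Online
        return parse_status(s.recv(1024).decode("ascii"))
```

The `frame_command` and `parse_status` helpers can be exercised without a camera on the bench; `set_online` needs a reachable In-Sight.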


Section 7 | Slide 19

Summary

In this section we covered the following topics:

• Discrete input and output lines are configured under Sensor → Discrete I/O Settings
• To send discrete results from the spreadsheet, set the output line to Programmed and use WriteDiscrete
• To read discrete lines into the spreadsheet, set the input line to User Data and use ReadDiscrete

Section 7 | Slide 20

Lab Exercise

Complete:
Lab Exercise 7.1 – Discrete I/O – Input
Lab Exercise 7.2 – WriteDiscrete


In-Sight Spreadsheets Standard Section 7 | Lab Exercise

Lab Exercise 7.1 – Discrete I/O – Input

At the end of this lab exercise, Participants will be able to:
• Use Input0 (or Input1) to trigger an asynchronous event
• Create WriteDiscrete functions to signal pass or fail over a discrete output line

The Participant will utilize the following In-Sight Functions to successfully complete this exercise:
• Event
• Count

Follow the steps below to complete the lab exercise:

1. Open MyCal.job from the last lab exercise.

2. Go to Sensor → Discrete I/O Settings to confirm the appropriate I/O expansion module is selected.
NOTE: Select the CIO-Micro for all cameras except the IS5000.

Page 1

3. Go to your Sensor's Input Settings and change the name for Input Line 0 to Button Push, the Type to Event Trigger, and the Signal to Rising Edge.

4. Enter the Comment Count Button Pushes in cell B38.

5. Insert an Event function into cell C40 of the spreadsheet.
The Event Property Sheet displays.

6. Select Discrete 0 as the Trigger.
Click the OK button. The Event will now trigger every time a signal is detected on Discrete Input Line 0.

Page 2
In-Sight Spreadsheets Standard Section 7 | Lab Exercise

7. Insert a Count tool into cell D40 of the spreadsheet.
The Count Property Sheet displays.

8. Set the Event to reference the Event entered into cell C40 and click the OK button.

9. Go Online and note that the Count tool changes as you press the button connected to the I/O module.
NOTE: The 24 VDC enters input line 0, input line 0 triggers the Event, and the Event then activates the Count.

10. Save the job as MyInput.job on the In-Sight camera and in your own folder on the PC.

Page 3

Lab Exercise 7.2 – WriteDiscrete

At the end of this lab exercise, Participants will be able to:
• Create WriteDiscrete functions to signal pass or fail over a discrete output line

The Participant will utilize the following In-Sight Functions to successfully complete this exercise:
• Global Bit
• WriteDiscrete

Follow the steps below to complete the lab exercise:

1. Continue with MyInput.job.

2. Enter the Comment Global Bit into cell B43. Then insert an AND function in cell C44 that references all of the tool results.
C44:

3. Insert a WriteDiscrete function into cell C48 of the spreadsheet.

Page 4
In-Sight Spreadsheets Standard Section 7 | Lab Exercise

The WriteDiscrete Property Sheet displays.

4. Set the Start Bit to 0.

5. Set the Number of Bits to 1.

6. Reference the Value parameter to the logic that determines a Pass or Fail for the part (toggles between 0 and 1). This is cell C44.
NOTE: WriteDiscrete will not show the current value of the logic until the system is online and triggered, so its value might not match the Global Bit until then.

7. Click the OK button.

8. Go to Sensor → Discrete I/O Settings to set the pulse duration.
The Discrete I/O Settings dialog box displays.

9. Go down to the Output portion of the dialog. Change the Name to Pass/Fail and set the Type for line 0 to Programmed.
Next, click on the Details button.

Page 5

The Line 0 Output Details displays.

10. Check the Pulse checkbox and set the Pulse Length to 1000 ms (1 second). This is so you will easily see the LED pulse on the I/O module.

11. Click the OK button twice and go Online.

12. Place the good block under the camera and do a manual trigger. As you do, watch the LEDs on the I/O Expansion module. OUT 0 should go on for a second.
NOTE: The result displayed by the WriteDiscrete function should be a 1 for the good block.

13. Place the bad block under the camera and do a manual trigger.
NOTE: The result displayed by the WriteDiscrete function should be a 0 for the bad block. Notice the LEDs on the I/O Expansion module. OUT 0 should not go on.

14. Save the job as MyOutput.job on the In-Sight camera and in your own folder on the PC.

Page 6
In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The
questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard
section.

Section 7 – Discrete I/O


• Identify which functions to use to read from and write to discrete channels
• Implement the WriteDiscrete function correctly in a job, including proper I/O
settings
• List four conditions that can affect whether In-Sight is Online or Offline

1. What are the two functions that are used in the spreadsheet to communicate over
discrete lines?

2. List three discrete input line signal types. Hint: You can find them in the Discrete
Input/Output configuration dialog.

3. List three discrete output line signal types. Hint: You can find them in the Discrete
Input/Output configuration dialog.

4. What are the four conditions that can affect whether In-Sight is Online or Offline?

Page 1
Section 8 | Slide 1

Network Communications
Section 8

In the eighth section of the In-Sight Spreadsheets Standard training we will focus on Network Communications.

Section 8 | Slide 2

Objectives

At the end of this section Participants will be able to:
- Describe different forms of network communication such as:
  - PLC protocols
  - FTP
  - TCP/IP
- Explain Client/Server communication in TCP/IP communications


Section 8 | Slide 3

What is Networking Used For?

Data
- Transfer cell values between In-Sight cameras, PCs, or factory floor devices

Control
- Control of Events, Triggers, Jobs, Online/Offline Status from a third party source (Native Mode commands)

Image
- For display purposes in third party applications
- For image archival

In-Sight networking is primarily used for two purposes.

The first is data exchange. This could be an exchange between two In-Sight systems, such as an upstream In-Sight passing a 2D code result to inform another In-Sight down the line of the type of part that is coming; the second In-Sight will then run inspections accordingly. Data exchange could also occur between an In-Sight system and other Ethernet devices, such as a PLC. The PLC might gather data from several In-Sights and decide whether to open or shut a certain valve.

The other use of networking is remote control. For example, one In-Sight system could trigger another In-Sight and have it capture an image. One In-Sight could also put another In-Sight online or take it offline, or force a load of a certain job on another In-Sight. Remote control can also occur between In-Sights and other devices. Robot control is a good example: in a simple pick and place application, In-Sight would instruct a robot arm to move in X, Y, and Theta to successfully pick the part.

In a "Master/Slave" setup, one In-Sight can act as a master which signals other In-Sight sensors on the vision area network to acquire their images when the master system begins its acquisition. This "network trigger" uses the same Ethernet cable, and so it has the potential to reduce factory wiring costs.

Section 8 | Slide 4

Who can Communicate?

In-Sight ↔ In-Sight
In-Sight ↔ Computer
In-Sight ↔ PLC/Controller

This slide shows the devices that the In-Sight camera can communicate with:
- Another In-Sight camera
- PLC/Controller
- Computer


Section 8 | Slide 5

Protocols for PLCs

EtherNet/IP *
– Rockwell, such as the ControlLogix
– Can do Implicit and Explicit messaging
PROFINET *
– Siemens, such as the S7-300 and S7-400
– Through Buffer commands and the variable table
MC Protocol (MELSEC)
– Mitsubishi, such as Q- and L-series PLCs
– CC-Link is the hardware implementation (remote registers)
CIPSync
– Isochronous communication with embedded time stamps

* Interactive Tutorials are available in the Support section of the web (www.cognex.com)

EtherNet/IP implicit messaging allows an In-Sight vision system's inputs and outputs to be mapped into tags in the ControlLogix PLC. Once these values are established, they are synchronized at an interval defined by the Requested Packet Interval (RPI). Each time the RPI expires, the PLC updates the vision system's status registers, and requests that the vision system update the PLC's status registers.

PROFINET supports a second, alternative way of communicating with an In-Sight vision system, using the Read and Write Record commands. These commands explicitly write to a specific area in memory on an In-Sight vision system; the Read and Write Record commands are sent to a specific device and that device always responds with a reply to that message. As a result, the Read and Write Record commands are better suited for less frequently occurring operations.

The Mitsubishi MC Protocol (also known as MELSEC) is an application-level protocol implemented on top of the Ethernet TCP/IP and UDP/IP layers. It is typically used to access information on Mitsubishi PLCs and motion controllers which support the MELSEC server protocol using 3E and 1E frames. The client driver also supports SLMP (Seamless Message Protocol) connections for communicating with CC-Link IE Field networks.

CIPSync provides the increased control coordination needed for demanding event sequencing, distributed motion control, and other highly distributed applications where absolute time-synchronization of devices is vital.

Section 8 | Slide 6

Terminology: TCP/IP and IP Address

TCP/IP – Transmission Control Protocol / Internet Protocol, a widely used protocol for communication on networks, including the Internet.

IP Address – At any point in time, each device on a given network must have a unique address of the form xxx.xxx.xxx.xxx, where xxx is a number 0-255. Example: 192.168.0.5

NOTE: Microsoft's default TCP/IP network is 192.168.0.XXX


Section 8 | Slide 7

Terminology: IP Address

• Two types of networks:
- Static: IP addresses assigned by a person
- Dynamic (DHCP): IP addresses assigned by a server (computer)
• In-Sight can be configured for either type of network
• A new In-Sight is DHCP

There are two types of networks:
- Static IP Address
  - IP addresses and subnet mask are assigned by a person (Network Administrator)
  - Stays the same through power cycling
- Dynamic IP Address
  - IP addresses and subnet mask are assigned by a server (computer)
  - Might change with power cycling, depending on the server

NOTE: In-Sight can be configured on either type of network. A new or repaired camera is shipped as DHCP.

Section 8 | Slide 8

Terminology: Subnet

Allowable IP addresses on a network are defined by its subnet mask.

Example: 255.255.255.0
Addresses on this subnet could be:
- 192.168.0.1
- 192.168.0.4
- 192.168.0.126
- 192.168.0.203

The Subnet is a group of networked In-Sights with similar IP addresses.

The Subnet Mask defines which part of the IP address refers to the network and which part refers to the host. The subnet mask must be the same for all devices on a network. Example: 255.255.255.0.
NOTE: 255 means all IP addresses on this subnet are identical in this position; 0 means each IP address is different in this position.

There are three types, or classes, of subnet masks. The class of a particular subnet on a network is defined by the number of bits used to represent the network and host address portions in the IP address, as in the table below:

Class   Subnet Mask     Network Address   Host Address
A       255.0.0.0       8 bit             24 bit
B       255.255.0.0     16 bit            16 bit
C       255.255.255.0   24 bit            8 bit

For example, consider a networked In-Sight host system with the IP address 192.168.0.1. If the first three numbers (192.168.0) identify the 24 bit network address, and the last number (1) is the 8 bit address for the In-Sight host on the network, then the subnet mask for this host is 255.255.255.0.
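The subnet arithmetic above can be double-checked with Python's standard `ipaddress` module. A small sketch using the example addresses from the slide (nothing In-Sight-specific here):

```python
import ipaddress

# Class C example from the notes: host 192.168.0.1, mask 255.255.255.0
net = ipaddress.ip_network("192.168.0.1/255.255.255.0", strict=False)
print(net)   # 192.168.0.0/24 -> 24 network bits, 8 host bits

# The other addresses listed on the slide all fall on the same subnet:
for host in ["192.168.0.4", "192.168.0.126", "192.168.0.203"]:
    assert ipaddress.ip_address(host) in net

# A host whose third octet differs is on a different subnet:
assert ipaddress.ip_address("192.168.1.5") not in net
```

`strict=False` lets us pass a host address (rather than the network address) together with its mask; the module derives the 192.168.0.0/24 network from it.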


Section 8 | Slide 9

Hosts Not on the Same Subnet

To log onto a host not on the subnet, you need to specify the Host's IP address.

The Explorer Host Table setup dialog provides In-Sight Explorer access to cameras on different subnets and allows you to create aliases for cameras. Explorer Host Table entries do not actually change the network settings of hosts at the specified IP addresses; these entries simply appear as additional cameras to In-Sight Explorer.

WARNING: Cameras that are configured using DHCP cannot be guaranteed to have a specific IP address. Since the Explorer Host Table requires a fixed IP address in order to communicate with a camera, it is strongly recommended that you set up any cameras on different subnets with static IP addresses.

To display the Explorer Host Table setup dialog:
System Menu → Explorer Host Table

Explorer Host Table Setup dialog controls:
- Add – Creates a new entry in the host table. You will be prompted with the Add Host dialog, where you can enter the host name and IP address of the new host. When finished, click OK.
- Edit – Modifies an existing entry in the host table. Select an entry from the list and click Edit. You will be prompted with the Edit Host dialog, where you can change the host name or IP address of the host you selected. When you are finished, click OK.
- Delete – Removes a host from the host table. Select the host you wish to delete and click Delete.

Host Parameters:
- Host Name – The name of the networked In-Sight camera to map (or assign) to the specified address.
- IP Address – The IP address of the networked In-Sight system to map to the specified host name.

Section 8 | Slide 10

TCP/IP: Client / Server

Between In-Sight & any other TCP/IP device, such as:
- Ethernet I/O module
- PLC
- PC running Visual Basic
- Another In-Sight

Client --request--> Server
Client <--response-- Server

TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or protocol of the Internet. It can also be used as a communications protocol in a private network (either an intranet or an extranet).

Sockets can be configured to act as a Server and listen for incoming messages, or to connect to other applications as a Client. After both ends of a TCP/IP socket are connected, communication is bi-directional.
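The request/response picture above can be reproduced on a PC with two plain TCP sockets. A minimal, self-contained Python sketch over loopback; the "server" stands in for the listening end (as an In-Sight TCPDevice configured as Server would be) and the "client" for the device that initiates the connection. The port number and the echo payload are arbitrary choices for illustration:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # loopback stand-ins
ready = threading.Event()

def server() -> None:
    """Listen for one connection, read a request, answer it, close."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                       # tell the client we are listening
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024).decode("ascii")
            conn.sendall(f"echo:{request}".encode("ascii"))

t = threading.Thread(target=server)
t.start()
ready.wait()

# Client side: initiate the connection, send a request, read the response.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"trigger")
    reply = cli.recv(1024).decode("ascii")
t.join()
print(reply)   # echo:trigger
```

In a real deployment the server address would be the camera's IP, and the payload whatever the WriteDevice/ReadDevice functions exchange.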


Section 8 | Slide 11

Device Functions

• WriteDevice – Sends one or more cell values to another device over the network using TCP/IP.
• ReadDevice – Receives data from another device on the network.
• TCPDevice – Defines an In-Sight spreadsheet cell as a TCP/IP device, which opens a connection between the In-Sight vision system and another TCP/IP device for sharing data over the network.

WriteDevice – Sends one or more cell values to another device over the network using TCP/IP.
- If the value is a number, it is sent as a string
- If the value is #ERR, nothing is sent

ReadDevice – Receives data from another device on the network.
- If running a job, waits for it to complete – otherwise, asynchronous

TCPDevice – Defines an In-Sight spreadsheet cell as a TCP/IP device, which opens a connection between the In-Sight vision system and another TCP/IP device for sharing data over the network.
- Used with both WriteDevice and ReadDevice
- Establishes the TCP/IP connection (Client/Server)

Section 8 | Slide 12

TCP

In-Sight as Client: an Event drives TCPDevice and WriteDevice, which send the Data.
In-Sight as Server: TCPDevice and ReadDevice receive the Data.

**Timeout ignored in server mode**

This slide shows examples of the Property Sheet and Spreadsheet view for both ReadDevice and WriteDevice.


Section 8 | Slide 13

In-Sight Ports – Which One Should I Use?

Port              Service
TCP 21            FTP
TCP 23            Telnet
UDP 68            DHCP (In-Sight vision system only)
TCP 502           Modbus
TCP/UDP 1069      In-Sight Protocol/Discovery
UDP 2222          EtherNet/IP
TCP 5753          Audit Message Server
TCP 44818         EtherNet/IP
TCP 50000         DataChannel

These are Reserved Ports – do not use!

A valid port assignment is any unused number between 1 and 65535, except for the ports reserved for In-Sight communications outlined in the table. In-Sight emulator users should always assign port numbers 3000 (the default) and higher to prevent potential conflicts with ports reserved by services on the PC.

Section 8 | Slide 14

Starting with TCPDevice: Auto-Inserted Functions

- On the Client, inserts WriteDevice and "Test String"
- On the Server, inserts ReadDevice
- Change the C2 reference to a cell containing a number or string, such as a FormatString

TCPDevice defines an In-Sight spreadsheet cell as a TCP/IP device (client or server), which opens a connection between the In-Sight vision system and another TCP/IP device for sharing data over the network. Once a TCP/IP connection has been established, data is communicated using the ReadDevice, WriteDevice, and QueryDevice functions. If the TCPDevice function initiates the communication with another TCP/IP device on the network, then the cell is the TCP/IP client.

- For In-Sight to In-Sight, both sender and receiver must go Online before doing a WriteDevice or ReadDevice
- Can have multiple connections between 2 devices, as long as each connection uses a different port number
- Jobs can have up to 12 connections (TCPDevices)


Section 8 | Slide 15

Native Mode Commands

• ASCII commands sent to an In-Sight from another device (computer, PLC, etc.) over the network
• Uses telnet protocol, port 23

They can:

Control In-Sight
Ex: Go online (SO1); open a job (LFmyjob)

Get info from the spreadsheet
Ex: Return the value that's in cell F12 (GVF012)

Put info into the spreadsheet
Ex: Put a value of 7.2 into cell A2 (SFA0027.2)

The Native Mode Details sets the terminator characters for Native Mode serial communications. These details are available only when Mode is set to Native.
- Fixed Input Length – Reads a fixed number of characters before triggering an event.
- Input Terminator – The ASCII value interpreted by In-Sight as the end of an incoming string. The default is 13 (carriage return). Input Terminator is disabled if Fixed Input Length is selected.
- Output Terminator – The ASCII value that In-Sight adds to each output string to mark the end of the transmitted string. The default is -1; -1 specifies CRLF (carriage return/line feed).

Section 8 | Slide 16

Native Mode Commands

In-Sight Explorer Help:

The Help menu in In-Sight Explorer lists all of the Native Mode Commands, with examples of each. Basic Native Mode Commands are two characters long, some with parameters following. Extended Native Mode Commands are longer than two characters.


Section 8 | Slide 17

Native Mode Commands

Issuing from a PC
Issuing from a PLC (more later)

One way to issue Native Mode commands from a PC is to go to the command line prompt and type in:

telnet <ip>

where <ip> is the IP address of the In-Sight system. This brings up a login screen, where you enter the username and password with which you want to log onto the In-Sight. Follow with the Native Mode commands you wish to send to the In-Sight. In the example in this slide, we tell In-Sight:

LFproduct.job

which causes the job named product to be opened from the camera. The 1 on the next line indicates success.

A PLC can issue Native Mode commands in what is called Explicit Messaging, which is explained later in this section.

Section 8 | Slide 18

Communicating with PLCs

This section gives an overview of how an In-Sight can send or receive values with a PLC in Spreadsheet Mode. It assumes familiarity with the PLC end of the communications.
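The cell-addressing pattern in the GVF012 and SFA0027.2 examples (two-letter command, column letter, three-digit row, optional value) can be sketched as a pair of helper functions. This is a rough illustration only; consult the Native Mode command reference in In-Sight Explorer Help for the authoritative syntax:

```python
def get_value_cmd(col: str, row: int) -> str:
    """Build a Get Value command: cell F12 -> 'GVF012'
    (row is zero-padded to three digits, per the slide examples)."""
    return f"GV{col.upper()}{row:03d}"

def set_float_cmd(col: str, row: int, value: float) -> str:
    """Build a Set Float command: put 7.2 into cell A2 -> 'SFA0027.2'."""
    return f"SF{col.upper()}{row:03d}{value}"

print(get_value_cmd("F", 12))       # GVF012
print(set_float_cmd("A", 2, 7.2))   # SFA0027.2
```

The resulting strings would then be sent over the telnet session (port 23) described above.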


Section 8 | Slide 19

Rockwell ControlLogix: Implicit Messaging

• In-Sight's inputs and outputs are mapped into tags in the ControlLogix or CompactLogix PLC
• Data is synchronized at an interval defined by the Requested Packet Interval (RPI). Non-deterministic.
• Requires an AOP (Add-On Profile) file
• Spreadsheet functions required:
  - FormatOutputBuffer
  - WriteEIPBuffer*
  - FormatInputBuffer
  - ReadEIPBuffer*
  * v5.1 and later

There are two kinds of messaging between In-Sight and the PLC: Implicit and Explicit.

With Implicit Messaging, you will use an AOP to establish a network connection.

Implicit Messaging requires spreadsheet functions in the In-Sight spreadsheet. For communicating from In-Sight to the PLC, use FormatOutputBuffer and WriteEIPBuffer. (With firmware versions 5.1 and later, use FormatOutputBuffer and WriteResultsBuffer.)

For communicating from the PLC to In-Sight, use FormatInputBuffer and ReadEIPBuffer. (With 5.1 and later, use FormatInputBuffer and ReadUserDataBuffer.)

Explicit Messaging does not require any functions in the spreadsheet. See a later slide for details.

Section 8 | Slide 20

Rockwell ControlLogix: Add-On Profile (AOP)

• Starting with In-Sight Explorer 5.5, an AOP is provided that supports all 4.x and 5.x In-Sight systems
• Download from the www.cognex.com Support section
• Extract the installation files and run the MPSetup.exe file
• Some pre-5.5 versions of In-Sight Explorer use an EDS (Electronic Data Sheet) instead of an AOP. See Help for details.

Use the Cognex Add-On Profile (AOP) when doing Implicit Messaging with In-Sight firmware. The AOP can be downloaded from the Cognex web site. Detailed information for installing the AOP is contained in the Integration of RSLogix and In-Sight guide.

This guide includes:
• How to install new cameras with the AOP.
• How to update cameras currently installed with EDS-generated profiles over to use the AOP.
• How to send data back and forth between camera/PLC.
• Reference to the assembly I/O.


Section 8 | Slide 21

Rockwell ControlLogix: Spreadsheet Functions – In-Sight to PLC
(Prior to 5.1, and 5.1 and later)

To send values from In-Sight to a PLC, use FormatOutputBuffer and WriteEIPBuffer (or WriteResultsBuffer).

Section 8 | Slide 22

Rockwell ControlLogix: Spreadsheet Functions – PLC to In-Sight
(Prior to 5.1, and 5.1 and later)

To receive values into the spreadsheet, use FormatInputBuffer and ReadEIPBuffer (or ReadUserDataBuffer).


Section 8 | Slide 23

Rockwell ControlLogix: Assembly Objects

Sample Input Assembly / Sample Output Assembly

With Implicit Messaging, you also need to set up the Assembly Objects on the PLC.

Assembly Objects represent the structure of the packet sent between the PLC and the camera. Each bit, or group of bits, is labeled according to its function and position in the packet.

There are always two Assembly Objects: (1) one for sending from the camera to the PLC (Input Assembly), and (2) one for sending from the PLC to the camera (Output Assembly). Each Industrial Protocol has its own packet description, but many of the Assembly Objects tend to have the same bits with very similar functionality.

Assembly Objects come in handy when you are unable to use the Add-On Profile (AOP) or Copy Rung Instruction. If you are not using the AOP, even if you are using the EDS (Electronic Data Sheet) file, your bits will not be labeled; all you will have is a large space in memory filled with bits. Using the position of a bit on this map, it is possible to determine functionality without having the bits labeled. This can also come in handy for troubleshooting, especially if the memory space on the PLC is not properly aligned with the bits.

Formats vary with the version of In-Sight. Complete formats and explanations are found in the In-Sight Explorer Help menu under:
"EtherNet/IP Object Model - In-Sight 4.x.x Firmware"
"EtherNet/IP Object Model - In-Sight 5.x.x Firmware"

Section 8 | Slide 24

Rockwell ControlLogix: Explicit Messaging

• Sent to a single device, which always responds with a reply to that message
• Better suited for operations that occur less frequently
• No AOP or EDS file needed
• No functions needed in the spreadsheet. The PLC issues MSG instructions to In-Sight, usually Native Mode commands, via the telnet protocol.

Example: change job on camera (Controller tags)

Unlike Implicit Messaging, there is no RPI. Usually, the PLC sends a MSG instruction to In-Sight, set up for a Native Mode command. No functions are used in the spreadsheet.


Section 8 | Slide 25

Rockwell ControlLogix: Help Topics

"Communicate with a Rockwell ControlLogix PLC"
"Ethernet/IP Communications"
"Install the Add-On Profile"
"Install the EDS Files"
etc.

Section 8 | Slide 26

Other PLCs: Help Topics

"Communicate with a Siemens PLC on a PROFIBUS Network"
"Transferring Data .. via PROFINET"
"PROFINET – Using Records with Siemens PLCs"
"PROFINET Communications with an In-Sight Vision System"
"PROFINET Settings dialog"
"MODBUS TCP Communications"
"PROFINET Communications"


Section 8 | Slide 27

WriteImageFTP

WriteImageFTP writes the current image to an FTP server on the network. This function is typically used to automatically save images of failed inspections during runtime, using cell state.
- The target device can be an In-Sight emulator, or any other host acting as an FTP server on the network.

WriteImageFTP parameters include:
- Event – Event to initiate the image write
- Host Name – Name or IP address of the FTP server
- User Name – Username for the FTP server
- Password – Password for the FTP server
- Image – Image to be saved
- File Name – Path where the image is being saved
- Max Append Value – Maximum image count
- Reset – Reset image count to 0
- Data Format – BMP or JPG
- Screen Capture – Saves with overlay (only with the 3400 system)
- Resolution – Resolution of the saved image
- Disable FTP Queuing – Drops subsequent images in the FTP queue if one is waiting

Section 8 | Slide 28

WriteFTP

The spreadsheet cell contains a structure, and cell(s) are auto-inserted depending on which Data Format was selected.

** The String must be empty for auto-inserted functions. **

WriteFTP writes a data file or appends a data string to a file. This function is typically used to log data results so that they can be viewed in a file.

Data Format:
0 = TEXT – Standard ASCII text file format (.TXT)
1 = HTML (default) – Standard HTML file format (.HTM)
4 = XML – Standard XML file format (.XML)

Section 8 | Slide 29

WriteFTP: Creating a Web-browsable Page

Leave the String field blank.
Set the Data Format field to HTML.
The result is a web page on the PC that is viewable in the browser: links to images, inspection results.

To create a log file of data with links to images:
1. In a cell, create a string of data to be logged (use FormatString).
2. WriteFTP: Leave the String field blank, and set the Data Format field to HTML.
   - WriteFTP will auto-insert cells, including WriteImageFTP.
3. Change the auto-inserted reference from Test String to your string.
4. The result is a web page on the PC, viewable in the browser, containing values and links to corresponding images.

Section 8 | Slide 30

FTP Functions – Authorized FTP Directory

For both WriteImageFTP and WriteFTP, you must "Authorize" the directory in System Options → Emulation.
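For a feel of what such a browsable log amounts to, here is a sketch that builds one HTML log row with an image link and an inspection result. This is purely illustrative; the markup WriteFTP actually emits is firmware-defined, and the file and result names here are made up:

```python
import html

def log_row(image_file: str, result: str) -> str:
    """One row of a simple HTML log: a link to the archived image
    followed by the logged inspection string."""
    return (f'<a href="{html.escape(image_file, quote=True)}">'
            f"{html.escape(image_file)}</a> {html.escape(result)}<br>")

row = log_row("fail_003.bmp", "12.375,45.000,Fail")
print(row)
```

A browser opening a file of such rows shows clickable links next to each logged result, which is the effect described on the slide.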


Section 8 | Slide 31

WriteImageLocal and WriteLocal

Similar to WriteImageFTP and WriteFTP, except images and data are written to the SD card.

For those In-Sight cameras that have an SD card (e.g., In-Sight 7802), you can store images and inspection data directly to the SD card.

Section 8 | Slide 32

Important Reminders

Some of the common issues that you should be aware of:
• Make sure that the system is online.
• The username and password are case sensitive.
• If a path is stated, make sure that the folder exists. In-Sight does not create folders.
• To control which images are written, develop logic to control the cell state of the WriteImageFTP function.
• Authorize the directory in System → Options.


Section 8 | Slide 33

Summary

In this section we covered the following topics:

• In-Sight uses standard communication protocols.
• Device functions provide for TCP/IP client-server communication.
  - TCPDevice defines client and server as well as the port.
  - ReadDevice and WriteDevice do the actual input/output.
• Commands that control In-Sight are called Native Mode commands, and are sent over the network using telnet (port 23).
• PLC communications may be Implicit or Explicit, and may require an AOP or EDS file, as well as Assembly Objects.
• WriteImageFTP sends the current image to a PC.
• WriteFTP logs data to a PC file.

Section 8 | Slide 34

Lab Exercise

Complete:
Lab Exercise 8.1 – Network Communication


In-Sight Spreadsheets Standard Section 8 | Lab Exercise In-Sight Spreadsheets Standard Section 8 | Lab Exercise

Lab Exercise 8.1 – Network Communication

At the end of this lab exercise, Participants will be able to:
• Use WriteDevice and ReadDevice commands to send data from an In-Sight camera
to HyperTerminal

The Participant will utilize the following In-Sight Functions to successfully complete this
exercise:
• WriteDevice
• ReadDevice
• TCPDevice
• FormatString

NOTE: The default behavior is that the Client usually writes information and the Server
receives it. We will need to set this up the opposite way: HyperTerminal will be the
Client (as it establishes the connection) and the In-Sight will be the Server.

Follow the steps below to complete the lab exercise:

1. Open MyOut.job from the last lab exercise.

2. Insert a TCPDevice function into cell C51 of the spreadsheet.
NOTE: This function can be found under Input/Output  Network  TCPDevice.
The TCPDevice Property Sheet displays.

Write to HyperTerminal

3. Leave the Host Name blank as the In-Sight will be the Server. Allow all the
defaults to remain and click the OK button.
NOTE: Adding this function will automatically create a ReadDevice in the
spreadsheet next to the TCPDevice.

4. Insert a FormatString function into cell E52.
NOTE: This function can be found under Text  String  FormatString. This will
allow you to choose multiple values as well as control the formatting of the string.
The FormatString dialog displays.

5. Format your Output String as follows:
Leading Text: ‘ (single quote)
Trailing Text: ‘ (single quote)
Terminators: None
Use Delimiter: Check the checkbox
The three fixture values
Decimal Places: 3
NOTE: Review the string on the bottom of the dialog box.
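The string these settings assemble can be approximated in Python (a sketch only; the fixture values below are hypothetical placeholders, not values from your job):

```python
def format_string(values, leading="'", trailing="'", delimiter=",", decimal_places=3):
    """Rough Python equivalent of the FormatString settings in step 5:
    leading/trailing text, a delimiter between values, fixed decimal places,
    and no terminators."""
    body = delimiter.join(f"{v:.{decimal_places}f}" for v in values)
    return leading + body + trailing

# Three hypothetical fixture values (e.g., Row, Col, Angle).
print(format_string([212.5, 317.25, -1.5]))  # -> '212.500,317.250,-1.500'
```

This mirrors what the preview at the bottom of the FormatString dialog shows as you change each setting.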

Page 1 Page 2
In-Sight Spreadsheets Standard Section 8 | Lab Exercise

6. Click the OK button.
7. Insert a WriteDevice function into cell D52.
NOTE: This function can be found under Input/Output  Network  WriteDevice.

The WriteDevice should reference:

1. The Event causing it to run ($A$0).
2. The TCPDevice that defines the channel.
3. The information to send out.
4. The string to be sent out (cell E52).

8. Go to Sensor  Network Settings and make note of the IP Address of the In-
Sight.

9. Go Online.

Setting up HyperTerminal
10. Start HyperTerminal.
NOTE: This step will vary depending on your Operating System –
Windows XP: Start  All Programs  Accessories  Communications 
HyperTerminal
Windows 7: Does not have HyperTerminal installed by default. A folder
has been provided named ‘HyperTerminal’ on your desktop. Open the
folder and double-click on Hyperterm.exe.
11. Name the new Connection TCPIP.
NOTE: Windows 7 users will not see the icons. This will not affect operation.

12. Select TCP/IP (Winsock) in the Connect using: drop-down menu.

Page 3 Page 4
In-Sight Spreadsheets Standard Section 8 | Lab Exercise

13. Enter the IP address of your In-Sight camera as the Host Address.
14. Enter 3000 as the Port Number.
15. Click the OK button.

16. Trigger your In-Sight camera.

17. Go Offline.
18. Save the job as MyComm.job on the In-Sight camera and in your own folder on the
PC.

Page 5
In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The
questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard
section.

Section 8 – Network Communications

• List different forms of network communication such as:
− PLC protocols
− FTP
− TCP/IP
• Explain Client/Server communication in TCP/IP communications

1. List three forms of network communication.

2. What are the three functions used for TCP/IP communications?

3. In TCP/IP communications, how do the roles of the Client and Server differ?

4. How does In-Sight know whether it is the Client or Server?

5. Name the two functions that write images or data to an FTP server.

Page 1 Page 2
Order of Execution &
Operator Interface
Section 9

Objectives
At the end of this section Participants will be able to:
- Discuss the order in which cells are executed in the spreadsheet
- List the In-Sight processor priorities and the information provided
by the Job Profiler
- Create a Custom View in a job, including Status indicators,
results of vision analysis, and a button to control the region of the
Histogram tool

Section 9 | Slide 2

In the ninth section of the In-Sight Spreadsheets Standard training we will focus on the Order of
Execution and Operator Interface.

At the end of this section Participants will be able to:
- Discuss the order in which cells are executed in the spreadsheet
- List the In-Sight processor priorities and the information provided by the Job Profiler
- Create a Custom View in a job, including Status indicators, results of the vision analysis, and a
button to control the region of the Histogram tool

Section 9 | Slide 1 Section 9 | Slide 2


Order of Spreadsheet Execution Job Profiler

How the spreadsheet is executed:

1. An event must be triggered.


2. Cells that depend on the event are executed, in order of
dependence.
3. Cells that do not depend on the event are not executed (for
example: comments cells never change their content).

Order of execution is not necessarily top to bottom of


spreadsheet

Section 9 | Slide 3 Section 9 | Slide 4

The spreadsheet is executed as follows:

1. An event must be triggered.
- AcquireImage is an event.
- An Event function is a user-defined event.
2. Cells that depend on the event are executed, in order of dependence.
3. Cells that do not depend on the event are not executed (for example: comment cells never
change their content).

Where more than one cell can be executed at a given point, order is usually across the row from left to right
and makes its way down the spreadsheet. Some exceptions exist, e.g., WriteResult & ReadResult.

NOTE: Order of execution is determined by a combination of dependencies and spreadsheet locations,
with some exceptions.

Profile Job – Opens the Profiler for benchmarking cell execution times. The Profile Job dialog is also
helpful when tracking down formulas or functions that are causing errors (#ERR) in an In-Sight job.

NOTE: The Profile Job dialog is not accessible in Online mode, or when the current user is logged in with a
Protected or Locked Access level.

Uses:
- Time job, cell by cell
- Look at dependencies in the job
- Order of execution

In the Profile Job dialog:
1. Click on Time to list in order of execution time (useful for optimizing)
2. Click on Expression to list alphabetically (useful for analyzing a job you inherited)
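The dependence rule above can be sketched in Python (the cell names and dependency map are hypothetical; only the ordering rule — a cell runs after the cells it references — mirrors the spreadsheet):

```python
# Toy model of event-driven, dependency-ordered cell execution.
deps = {
    "A0": [],      # AcquireImage: the event itself
    "B2": ["A0"],  # a vision tool referencing the image
    "C2": ["B2"],  # logic referencing the tool's result
    "D5": [],      # a comment cell: no dependence on the event
}

def execution_order(event, deps):
    """Return the cells re-executed when `event` fires, in order of dependence."""
    # Invert the map: for each cell, who references it.
    dependents = {c: [] for c in deps}
    for cell, refs in deps.items():
        for r in refs:
            dependents[r].append(cell)
    order, frontier = [], [event]
    while frontier:
        cell = frontier.pop(0)
        order.append(cell)
        frontier.extend(dependents[cell])
    return order

print(execution_order("A0", deps))  # -> ['A0', 'B2', 'C2']
```

Note that D5 never appears in the result: cells with no dependence on the event (such as comments) are not re-executed, just as stated above.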

Section 9 | Slide 3 Section 9 | Slide 4


Priorities Steps for Application Creation

In-Sight allocates processor time according to priorities,
from High Priority (1) to Low Priority (6).

Steps for Application Creation (a pyramid, built from the bottom up):
1. Analyze the Problem
2. Create a Prototype Job
3. Design the Operator Interface
4. Complete and Deploy the Job
Section 9 | Slide 5 Section 9 | Slide 6

In-Sight allocates processor time according to priorities:

1. Trigger for image acquisition
2. Execution of spreadsheet
3. Serial port communications
- Except WriteSerial, which is priority 2
4. Ethernet communications
- Except WriteDevice, which is priority 2
5. Image logging
6. Screen update

NOTE: Display may not update every cycle if spreadsheet execution is Continuous or process demand is
higher than normal.

Steps for Application Creation:

1. Analyze the Problem √
- Determine what needs to be inspected
- Understand what is considered Good and Bad
2. Create a Prototype Job √
- Use vision tools and logic to inspect the part
3. Design the Operator Interface
- Decide how the results will be given to the user (Visual, Ethernet, Discrete, etc.)
4. Complete and Deploy the Job
- Final details for longevity and ease of use
- Maintenance functions like back-up and restore
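The priority scheme can be modeled as a simple priority queue (a sketch; the task names are illustrative, with WriteSerial and WriteDevice promoted to priority 2 as documented above):

```python
import heapq

# Hypothetical task labels mapped to the documented priorities
# (1 = trigger ... 6 = screen update); WriteSerial and WriteDevice
# are the two exceptions that run at priority 2.
PRIORITY = {
    "trigger": 1, "spreadsheet": 2, "serial": 3,
    "ethernet": 4, "image_logging": 5, "screen_update": 6,
    "WriteSerial": 2, "WriteDevice": 2,
}

def run_order(tasks):
    """Order pending tasks by priority (arrival order breaks ties)."""
    heap = [(PRIORITY[t], i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(run_order(["screen_update", "WriteDevice", "trigger", "ethernet"]))
# -> ['trigger', 'WriteDevice', 'ethernet', 'screen_update']
```

This also illustrates the NOTE above: when higher-priority work keeps arriving, the screen update (priority 6) is the first thing to be starved.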

Section 9 | Slide 5 Section 9 | Slide 6


Designing an Operator Interface Graphics Functions: Controls

Determine which operator controls may be needed for:


- Set up
- Production
- Maintenance and troubleshooting

Determine how results will be reported or communicated:


- Custom View
- Discrete output lines
- Serial output lines
- Networking
Ease of use is key!
Section 9 | Slide 7 Section 9 | Slide 8

Now you need to determine how the user will interact with the vision system. It is recommended that you
prototype your interface on paper before you begin to enter it into the spreadsheet.

Things to Consider:
1. Avoid overwhelming the operator with Controls. If you require many, it is possible to spread
them over several panels, using tools called Dialog Boxes and Wizards.
2. Limit the operator interface to just what the operator needs to see or data the operator needs to
enter.
3. Design the operator interface for clarity and ease of use.

Controls functions insert interactive elements into the spreadsheet that allow users to make simple
configuration changes in an In-Sight job without having to use the Insert Function dialog or property sheet.

Controls are only interactive when the Access Level for the current user is set to Full or Protected; they
cannot be accessed when the Access Level is Locked.

Section 9 | Slide 7 Section 9 | Slide 8


Controls: CheckBox Controls: Button

CheckBox: Returns 1 when the box is checked, 0 when it is not.
To check or uncheck the box, click on the cell containing the CheckBox.

Button: Returns a momentary 1 when the Button is clicked, 0 otherwise.
To activate the Button, click on the cell containing the Button.

Section 9 | Slide 9 Section 9 | Slide 10

A CheckBox inserts a labeled checkbox control into the spreadsheet.
NOTE: To access the property sheet for a CheckBox, right-click the CheckBox and select Edit Function.

CheckBox Inputs
- Name – Specifies a text string label to appear next to the CheckBox.

CheckBox Outputs
- Returns – A value of 0 when cleared (unchecked), and 1 when selected (checked).
- Results – A labeled CheckBox control.

Example: Use to conditionally enable/disable cells via Cell State.

A Button inserts a labeled push button control into the spreadsheet. Optionally, a button press can be
configured to signal a spreadsheet event trigger.
NOTE: To access the property sheet for a Button, right-click the Button and select Edit Function.

Button Inputs
- Name – Specifies a text string label; this name will appear on the button itself.
- Trigger – Optionally specifies a trigger signal to occur when the button is pressed.
- -1 = none (default) – No trigger will be signaled.
- 32 = manual – A manual trigger will be signaled.
- (80 to 87) = soft (0 to 7) – A soft trigger event will be signaled.

Button Outputs
- Returns – A value of 1 when the button is pressed; otherwise the return value is 0.
- Results – A labeled Button control.

Example: Buttons can trigger events that run other vision tools, using Soft Triggers.

Section 9 | Slide 9 Section 9 | Slide 10


Controls: ListBox Controls: Entering Values and Strings

ListBox: Allows the user to select an item from a pull-down list.
The index (0, 1, 2, …) is stored in the cell.
NOTE: The Choose function allows you to associate a variable # with the index
#. You will often use a ListBox with a Choose function.

Edit controls: EditFloat, EditString, EditInt
NOTE: To access the property sheet for an Edit… control, right-click the
control and select Edit Function.
Section 9 | Slide 11 Section 9 | Slide 12

A ListBox inserts a drop-down list control in the spreadsheet.
NOTE: Since the ListBox function can accept a variable number of strings, this function is not configured
through a property sheet. This function can be configured directly in the cell or in the formula bar on the
Job Edit toolbar.

ListBox Inputs
- String0, 1, 2, … – A variable-length list of individual text strings, each of which returns a different
value when selected by a user from the drop-down list.

ListBox Outputs
- Returns – A value representing the index number (zero-based) of the selected item.
- Results – A ListBox control.

Example: Use as the index in a Choose function to select different thresholds for different parts:
Choose (cell, value for Index 0, value for Index 1, …)
Choose (A2, 1.34, 2.11, …) where A2 is the cell with the ListBox

An EditFloat function inserts a floating-point edit control into the spreadsheet.

EditFloat Inputs
- Min – Specifies the minimum value that the edit box control will accept (-9999999 to 9999999; default = 0).
- Max – Specifies the maximum value that the edit box control will accept (-9999999 to 9999999; default = 100).

EditFloat Outputs
- Returns – The value entered into the control.
- Results – An EditFloat control.

An EditString function inserts a text edit control into the spreadsheet.

EditString Inputs
- Max String Length – Specifies the maximum text string length (in characters) that the edit box control will
accept (1 to 255; default = 8).

EditString Outputs
- Returns – The text string entered into the control.
- Results – An EditString control.

An EditInt function inserts an integer edit control into the spreadsheet.

EditInt Inputs
- Min – Specifies the minimum value that the edit box control will accept (-9999999 to 9999999; default = 0).
- Max – Specifies the maximum value that the edit box control will accept (-9999999 to 9999999; default = 255).

EditInt Outputs
- Returns – The value entered into the control.
- Results – An EditInt control.
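The ListBox-plus-Choose pattern can be sketched in Python (a hedged model; the threshold values are hypothetical, mirroring the Choose(A2, 1.34, 2.11, …) example above):

```python
def choose(index, *values):
    """Sketch of the spreadsheet Choose function: returns the value at the
    zero-based index, i.e., the number a ListBox cell stores."""
    return values[index]

# A ListBox with strings "Part A", "Part B", "Part C" stores index 0, 1, or 2.
a2 = 1  # the operator picked the second list item
threshold = choose(a2, 1.34, 2.11, 3.75)  # hypothetical thresholds
print(threshold)  # -> 2.11
```

The operator only ever sees the part names in the pull-down list; the Choose call quietly maps the stored index to the matching threshold.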
Section 9 | Slide 11 Section 9 | Slide 12
EditRegion EditRegion

Defines a Region which can be referenced by one or more


Vision functions

• Auto-inserted functions that describe region


• Can be referenced by vision functions

Section 9 | Slide 13 Section 9 | Slide 14

An EditRegion function inserts an interactive graphical region control into the spreadsheet. When the
control is clicked, the display switches to Interactive Graphics Mode where the size, position, rotation, and
curvature of the region can be adjusted.

EditRegion Inputs
- Image – This parameter must reference a spreadsheet cell that contains an Image data
structure; by default, this parameter references A0, the cell containing the AcquireImage image
data structure. This parameter can also reference other Image data structures, such as those
returned by the Vision Tool Image functions or Coordinate Transforms Functions.
- Fixture – Specifies the image coordinate system in which the input region is defined.
- Move – Disables or enables the ability to adjust the input region position.
- Size – Disables or enables the ability to adjust the input region height and width.
- Rotate – Disables or enables the ability to adjust the input region orientation.
- Bend – Disables or enables the ability to adjust the input region curvature.
- Name – Specifies a text label for the EditRegion control element in the spreadsheet.
- Show – Specifies the display mode for the EditRegion graphical overlay on top of the image.

EditRegion Outputs
- Results – An EditRegion control, along with a corresponding results table that will be created in
the adjacent cells to the right.

The following Vision Data Access functions are automatically inserted into the spreadsheet to create the
results table:

Row    GetRow(Region)    The x-coordinate of the position.
Col    GetCol(Region)    The y-coordinate of the position.
High   GetHigh(Region)   The height of the region.
Wide   GetWide(Region)   The width of the region.
Angle  GetAngle(Region)  The orientation of the region.
Curve  GetCurve(Region)  The curvature of the region.
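The auto-inserted Get… functions simply read individual fields out of the Region data structure. A hypothetical Python model (the field names follow the results table above; the sample values are invented):

```python
from dataclasses import dataclass

# Hypothetical stand-in for the In-Sight Region data structure.
@dataclass
class Region:
    row: float    # GetRow
    col: float    # GetCol
    high: float   # GetHigh
    wide: float   # GetWide
    angle: float  # GetAngle
    curve: float  # GetCurve

def get_row(r):  return r.row   # mirrors GetRow(Region)
def get_col(r):  return r.col   # mirrors GetCol(Region)
def get_high(r): return r.high  # mirrors GetHigh(Region)
def get_wide(r): return r.wide  # mirrors GetWide(Region)

# Invented example region, just to show the accessors at work.
bar = Region(row=120.0, col=80.0, high=40.0, wide=200.0, angle=0.0, curve=0.0)
print(get_row(bar), get_wide(bar))  # -> 120.0 200.0
```

Because each accessor returns a plain number into its own cell, other vision functions can reference those cells to follow the region as the operator moves it.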

Section 9 | Slide 13 Section 9 | Slide 14


Graphics Functions: Displays Displays: Chart

Chart plots a value over time.


Example: Histogram Average

Section 9 | Slide 15 Section 9 | Slide 16

Display functions insert graphic displays into the spreadsheet.

The following Display functions insert graphics into the spreadsheet:
- Chart
- ColorLabel
- MultiStatus
- Status
- StatusLight

A Chart function inserts a strip chart display element that occupies a row of the spreadsheet. The size of
the chart can be adjusted by adjusting the row height.
NOTE: Charts are not supported in Dialogs.

Chart Inputs
- Event – Specifies the update event to clock the next value into the chart. This argument must
be a cell reference to either A0 (the AcquireImage cell) or to a cell containing an Image or Event
function.
- Value – Specifies the number to be charted. This argument usually contains a reference to a
spreadsheet cell containing a number that updates on every cycle (-9999999 to 9999999;
default = 0).
- Number – Specifies the number of values to chart (2 to 5000; default = 10).
- Name – Specifies the text string label that appears in the chart display.
- Range: Min – Specifies the minimum vertical chart coordinate (-9999999 to 9999999; default = 0).
- Range: Max – Specifies the maximum vertical chart coordinate (-9999999 to 9999999; default = 0).

NOTE: If Range: Min and Range: Max are set to 0, the chart will continually scale the vertical coordinate
range according to the input values.
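The Number window and the auto-scaling rule from the NOTE can be sketched in Python (a model of the behavior described above, not the actual Chart implementation):

```python
from collections import deque

def chart_state(samples, number=10, range_min=0, range_max=0):
    """Sketch of the Chart behavior: keep only the last `number` values;
    if Range: Min and Range: Max are both 0, auto-scale to the data."""
    window = deque(samples, maxlen=number)   # oldest values fall off the left
    if range_min == 0 and range_max == 0:    # the auto-scale rule from the NOTE
        lo, hi = min(window), max(window)
    else:
        lo, hi = range_min, range_max
    return list(window), (lo, hi)

# Hypothetical Histogram Average values clocked in on four acquisitions.
values, vertical_range = chart_state([5, 9, 3, 7], number=3)
print(values, vertical_range)  # -> [9, 3, 7] (3, 9)
```

With an explicit range (e.g., range_min=0, range_max=255 for greyscale averages) the vertical axis stays fixed, which is usually easier for an operator to read at a glance.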

Section 9 | Slide 15 Section 9 | Slide 16


Displays: ColorLabel Displays: ColorLabel

Displays background and text in colors

Section 9 | Slide 17 Section 9 | Slide 18

A ColorLabel function inserts a color label display element in the spreadsheet. The label fills the cell with
the background color and writes text in the foreground color.

ColorLabel Inputs
- Name – Specifies a text string label to appear in the foreground.
- Fore Color – Specifies the foreground color.
- Back Color – Specifies the background color.

ColorLabels can be made to change color based on logic.

Section 9 | Slide 17 Section 9 | Slide 18


Displays: Status, StatusLight, MultiStatus StatusLight References

All three functions reference another cell.

Status: negative zero positive
StatusLight: You choose colors from a list
MultiStatus: Two colors for each bit; you choose colors from a list

For each of the tests, we used logic to come up with 1 for
pass, 0 for fail. These are the values you can use with
StatusLight.

Section 9 | Slide 19 Section 9 | Slide 20

Status inserts a simulated LED status light display element into the spreadsheet. The Status cell
displays the specified value as a red, yellow, or green LED, with text labels for each color.

StatusLight inserts a simulated LED status light display element into the spreadsheet; the color of the
status light and the labels can both be user-specified. The StatusLight cell displays the specified color
LED with text labels for each color.

MultiStatus inserts an array of simulated LED status lights into the spreadsheet. The function displays
the specified bits from a control value as a single LED with two color states.

StatusLight Inputs
- Status – References the cell containing the status monitor value.
- Label: Positive – Specifies the text that will be displayed when the Status is a positive value.
- Label: Zero – Specifies the text that will be displayed when the Status is zero.
- Label: Negative – Specifies the text that will be displayed when the Status is a negative value.
- Color: Positive – Specifies the Status LED color that will be displayed when the Status is a
positive value. (default = green)
- Color: Zero – Specifies the Status LED color that will be displayed when the Status is zero.
(default = yellow)
- Color: Negative – Specifies the Status LED color that will be displayed when the Status is a
negative value. (default = red)
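The sign convention can be sketched in Python (a model of the behavior described above; the default colors come from the property sheet, while the label strings here are illustrative):

```python
def status_light(value,
                 labels=("Error", "Fail", "Pass"),     # negative, zero, positive
                 colors=("red", "yellow", "green")):   # the documented defaults
    """Sketch of StatusLight: the sign of the referenced cell's value picks
    one of three user-specified label/color pairs."""
    idx = 0 if value < 0 else (1 if value == 0 else 2)
    return labels[idx], colors[idx]

print(status_light(1))   # -> ('Pass', 'green')
print(status_light(0))   # -> ('Fail', 'yellow')
print(status_light(-1))  # -> ('Error', 'red')
```

This is why the pass/fail logic earlier in the course returns 1 or 0: those values map directly onto the positive and zero states of a StatusLight.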

Section 9 | Slide 19 Section 9 | Slide 20


Graphics Functions: Image Image: PlotCross

Before

After

Section 9 | Slide 21 Section 9 | Slide 22

Image functions plot graphic overlays on top of the image.

The following Image functions insert graphics into the spreadsheet:
- PlotArc
- PlotCircle
- PlotCompositeRegion
- PlotCross
- PlotData
- PlotLine
- PlotPoint
- PlotPolygon
- PlotRegion
- PlotString

The PlotCross function plots a cross on the image.
NOTE: The Plot function will disappear (hide) when disabled.

PlotCross Inputs
- Cross – Specifies the image coordinates of the Cross to be plotted.
- Row – The row coordinate of the cross’s center.
- Column – The column coordinate of the cross’s center.
- Angle – The cross orientation.
- High – The cross height.
- Wide – The cross width.
- Name – Specifies a text label to display with the graphic on the image.
- Color – Specifies the color (default = green) of the plotted graphic.
- Show – Specifies the display mode for the graphic overlay on the image.
- 0 = Off – The graphic will be hidden, except when the cell containing the function is
highlighted in the spreadsheet.
- 1 = On (default) – The graphic will be displayed at all times.

PlotCross Outputs
- Returns – A Plot data structure containing the graphic.

Section 9 | Slide 21 Section 9 | Slide 22


Plotting Graphics Creating a Custom View

1. Highlight the cell(s) that will constitute the Custom View


Must be a rectangular subset of spreadsheet.

Section 9 | Slide 23 Section 9 | Slide 24

You can use Snippets to help you build and plot graphics quickly in the spreadsheet.

A Snippet accesses the Snippet dialog to automate frequently performed tasks by exporting groups of
preconfigured cells, saved as a Cell Data (.CXD) file, to the Snippets folder on the PC. The snippet can
then be imported into the spreadsheet. Alternately, snippets can be imported by dragging and dropping
the snippet directly from the Palette into the spreadsheet.
- Import – Opens the Snippet dialog to import the snippet into the spreadsheet.
- Export – Opens the Snippet dialog to export the snippet to the Snippets folder on the PC.

NOTE: Snippets cannot be exported to the PC using the ExportData function.

Section 9 | Slide 23 Section 9 | Slide 24


Creating a Custom View Creating a Custom View

2. Indicate that they are the Custom View and how to display it.

3. Toggle between full spreadsheet view and Custom View:
View  Custom View (or F6)

Section 9 | Slide 25 Section 9 | Slide 26

To create Custom View settings, select Edit  Custom View Settings:

- Select Cells – Allows selection of cells used for Custom View


- Move/Resize – Allows setting of size and location of Custom View
- Center – Centers Custom View on display screen
- Size To Fit – Resizes Custom View to original cell sizing
- Display Elements – Enable/disable the display of:
- Image
- Spreadsheet
- Graphics
- Refresh Conditionally – Allows the update of the display to be controlled through logic.

Section 9 | Slide 25 Section 9 | Slide 26


Summary Summary

• Cell dependencies govern the order of execution for
the spreadsheet.
• Control functions allow the operator to enter or
change information (buttons, checkboxes, etc.),
even when they are online.
• Display functions show graphical information on
the spreadsheet (status lights, charts, etc.).
• Image functions draw information on the image
(crosses, points, etc.).

• A Custom View is a block of cells which can be
displayed instead of a whole spreadsheet.
• To create a Custom View, highlight cell(s), then
Edit  Custom View Settings.
• To toggle between the Custom View and
Spreadsheet, View  Custom View or <F6>.

Section 9 | Slide 27 Section 9 | Slide 28

In this section we covered the following topics:

- Cell dependencies govern the order of execution for the spreadsheet.
- Control functions allow the operator to enter or change information (buttons, checkboxes, etc.),
even when they are online.
- Display functions show graphical information on the spreadsheet (status lights, charts, etc.).
- Image functions draw information on the image (crosses, points, etc.).
- A Custom View is a block of cells which can be displayed instead of a whole spreadsheet.
- To create a Custom View, highlight cell(s), then Edit  Custom View Settings.
- To toggle between the Custom View and Spreadsheet, View  Custom View or <F6>.

Section 9 | Slide 27 Section 9 | Slide 28


Lab Exercise

Section 9 | Slide 29

Complete:
Lab Exercise 9.1 – Profiler
Lab Exercise 9.2 – Operator Interface
Lab Exercise 9.3 – (if time allows)

Section 9 | Slide 29
In-Sight Spreadsheets Standard Section 9 | Lab Exercise

Lab Exercise 9.1 – Profiler

At the end of this lab exercise, Participants will be able to:
• Create an Operator Interface in the spreadsheet

The Participant will utilize the following In-Sight Functions to successfully complete this
exercise:
• Status
• Button
• Custom View
• References

Follow the steps below to complete the lab exercise:

1. Continue with MyComm.job from the previous lab exercise.
2. Click Sensor  Profile Job.
The Profile Job dialog box displays.
NOTE: By default you see All cells – to see the time each tool takes to run, highlight
the row and click the Acquire button.
3. Click the Structure Only checkbox. This shows just those cells that have a
structure in them (your tools).
4. Click the Acquire button again to run the inspection once.
NOTE: You will see the updated time for each tool as well as a total cycle time in
the lower right corner. The total time in the Profiler will be greater than the time in
the lower right-hand corner of the spreadsheet. This discrepancy is due to the
rendering of graphics on the display.

Lab Exercise 9.2 – Operator Interface

In an earlier lab, we deliberately skipped the first 10 rows of the spreadsheet. Now, we are
going to use those cells to define a Custom View, which will reference a number of cells
you already set up.

Though we will walk you through setting this up, please keep in mind that these are
suggestions. If you want to try your own design, please keep the data in a logical fashion
and in a way that will make it easy for an operator to interpret.

1. Continue with MyComm.job from the previous lab.
2. Enter the comment Part into cell B2.
3. Insert a StatusLight function into cell C2.
NOTE: This function is found under Graphics  Displays.
The StatusLight Property Sheet displays.
Page 1 Page 2
In-Sight Spreadsheets Standard Section 9 | Lab Exercise

4. Configure the StatusLight Property Sheet as follows:
Status – Reference the Global Bit result
Label: Positive – Pass
Label: Zero – Fail
Label: Negative – Error
Color: Positive – Green
Color: Zero – Red
Color: Negative – Dark Blue
5. Click the OK button.
6. Enter the comment Bar into cell B3.
7. Insert a StatusLight function into cell C3.
8. Configure the StatusLight Property Sheet as follows:
Status – Reference the Histogram result
Label: Positive – No Bar
Label: Zero – Bar Found
Label: Negative – Error
Color: Positive – Green
Color: Zero – Red
Color: Negative – Dark Blue
9. Repeat the steps above for the Width and Holes.
The display should look similar to below:

Good Block Bad Block

10. Enter the comment Distance in cell B6.
11. Reference the dimensional measurement of the block in cell D20.
NOTE: You may want to enter your units of measure (mm) into cell D6 so that the
results make sense to your operator.
12. Enter the comment Adjust for Bar Region Online in cell B55.
13. Insert an EditRegion function into cell C57 in the spreadsheet.
NOTE: This function is found under Graphics  Controls.
The EditRegion Property Sheet displays.

14. Fixture to the FindPatterns Region and enter the Name which will appear on the
button as Bar Region. Select or deselect options that will be available to your user
while they are online.

15. Click the OK button and note the Bar Region button and auto-inserted
functions that are created.

Page 3 Page 4
In-Sight Spreadsheets Standard Section 9 | Lab Exercise

16. Attach the Region to the ExtractHistogram tool by going into the Property Sheet
and referencing the Region parameters to the output of the EditRegion tool.
Because the region is now fixtured in EditRegion, we need to disable fixturing in
the ExtractHistogram tool itself. To do this, zero out the fixture information in the
Histogram tool.
17. Click the OK button.
NOTE: Now when the Bar Region button is pushed, the region for the Histogram tool
can be changed.
18. Reset the region to be similar to the area originally set for the Histogram tool, being
sure not to get too close to the edge of the block.

19. Cut the Bar Region button and Paste it into cell C1, directly on top of the Custom
View Status Lights area.

20. Insert a PassFailGraphic snippet into cell B59 of the spreadsheet. This will add
an image display to your application to quickly show the user if the part is good or
bad.
NOTE: This snippet is found under Display.
21. To attach the snippet to your program, you need to make a reference from cell B60
to your Global Bit in cell C44.

Page 5 Page 6
In-Sight Spreadsheets Standard Section 9 | Lab Exercise

22. Location (cell C60) is an EditPoint function that allows you to set where the string
will be displayed on the image, based on the upper left-hand corner. It displays a
point that can be moved. Leave it at the default for now.

23. From the drop-down list, select the type of graphic that you would like to use to
show the Pass/Fail status of the job. (In our example below, we use Thumbs.)

Choices include:
Check:  Pass/Fail: Pass Fail
OK/NG: OK NG Thumbs: 

24. Select Custom View Settings under the Edit menu to finish creating your Custom
View.
The Custom View Settings displays.

25. Click the Select Cells button and choose cells B1 through D6.
Page 7 Page 8
In-Sight Spreadsheets Standard Section 9 | Lab Exercise

NOTE: The selected cells will be outlined in red.

Once the cells are highlighted, click the <Enter> key. The cells will display in the
upper left. You can move/resize the window as you would like.

26. Once you have positioned and sized the Custom View as you would like, click the
OK button. Although you have now defined the Custom View, it will not
yet be displayed. To toggle to the Custom View, click the <F6> key.
NOTE: To toggle between the spreadsheet and Custom View mode, click the <F6>
key.
27. Test the good block and the bad block and review the results.

Good Part Bad Part

28. Save the job as MyOp.job on the In-Sight camera and in your own folder on the PC.

Lab Exercise 9.3 – (if time allows)

• Add the CountPassFail tool, which is found in the Clocked Data Storage category,
to the Custom View to determine the run rate for the result of each vision tool. The
actual Count functions should not be seen in the view.

• Insert a Chart function to graph the ExtractHistogram’s Average value over time.

Page 9 Page 10
In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The
questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard
section.

Section 9 – Order of Execution & Operator Interface


• Discuss how cells are executed in the spreadsheet
• List the In-Sight processor priorities and the information provided by the Job
Profiler
• Create a Custom View in a job, including Status indicators, results of the
vision analysis and a button to control the region of the Histogram tool

1. List three things the Profiler can help you understand about a job.

2. List the three steps you need to do to create and see a Custom View.

3. List three Control functions.

Page 1
Deployment
Section 10

Objectives
At the end of this section Participants will be able to:
• Employ utilities available to deploy the In-Sight vision system
such as:
- User Lists
- Back-up/Restore
- Update Firmware
- Automatic Startup of the Camera

• Display their application at deployment using the VisionView 900

Section 10 | Slide 2

In the tenth section of the In-Sight Spreadsheets Standard training we will focus on Deployment.

At the end of this section Participants will be able to:

- Employ utilities available to deploy the In-Sight vision system such as:
- User Lists
- Back-up/Restore
- Update Firmware
- Automatic Startup of the Camera
- Display their application at deployment using the VisionView 900

Section 10 | Slide 1 Section 10 | Slide 2


Sensor Menu Network Settings

Section 10 | Slide 3 Section 10 | Slide 4

The Sensor Menu allows you to configure settings and perform operations on the active In-Sight vision
system.

The Network Settings dialog configures the active In-Sight vision system to communicate on a TCP/IP
network.

NOTE: If you choose not to restart an In-Sight vision system after modifying its network settings, In-Sight
Explorer will not reflect the modifications until the vision system has been rebooted, despite the fact that
the Network Settings dialog reports the new configuration.
In-Sight emulators inherit their network settings from Windows; these settings cannot be changed with In-
Sight Explorer. To modify an emulator’s network settings, open the network applet in the Windows
Control Panel and make the desired change.

Section 10 | Slide 3 Section 10 | Slide 4


Date Time Settings

Section 10 | Slide 5

Date/Time Settings opens the Date/Time Settings dialog, allowing you to adjust the current date or time, establish a time zone and configure SNTP (Simple Network Time Protocol) servers for automatic time synchronization.

- Used to establish the date/time on the active In-Sight camera.
- In-Sight cameras maintain relative time only since last power up.
- Automatic synchronization of the camera's internal clock is allowed using a connection to a Simple Network Time Protocol (SNTP) server.
- Use an SNTP server to keep track of the absolute date/time.

NOTE: The Date/Time Settings dialog is disabled for emulators.
After any changes are made within the Date/Time Settings dialog, the power must be cycled on the active In-Sight vision system.

User Access Settings

Section 10 | Slide 6

The User Access Settings dialog maintains the access level and FTP read/write privileges for authorized users of In-Sight vision systems and emulators. The User Access Settings determine which users may log onto a particular In-Sight vision system through the Log On/Off dialog, as well as the types of changes they can make to the active job. Each In-Sight vision system has its own User List, separate from every other vision system on the network. If a user needs access to a particular vision system, they must know a user name and password that already exists in that vision system's User List.

NOTE: Every In-Sight vision system is pre-configured with three users: admin, operator and monitor; these users are configured for Full, Protected, and Locked access levels, respectively.
The maximum number of users that can be added to one In-Sight vision system is 32.

Access Levels:
- Full – Complete, unrestricted access.
- Protected – Allows you to edit tool parameters, but not to add or delete tools.
- Locked – You can only monitor the operation of the current sensor.

FTP Privileges:
- Read – The user has permission to read the file, but cannot edit the file.
- Write – The user has permission to edit the file.
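The user-list rules described in the notes (three factory-default users, three access levels, two FTP privileges, and a 32-user cap per vision system) can be sketched as a small data model. This is purely an illustrative Python model of the behavior described above; it is not Cognex code, and the class and function names are invented for the example:

```python
from dataclasses import dataclass

ACCESS_LEVELS = ("Full", "Protected", "Locked")
MAX_USERS = 32  # per-vision-system limit stated in the course notes


@dataclass
class User:
    name: str
    access: str      # one of ACCESS_LEVELS
    ftp_write: bool  # False means read-only FTP privileges


class UserList:
    """Illustrative model of one sensor's User List (not the Cognex API)."""

    def __init__(self):
        # Factory defaults per the course notes: admin/operator/monitor
        # with Full/Protected/Locked access respectively.
        self.users = {
            "admin": User("admin", "Full", True),
            "operator": User("operator", "Protected", False),
            "monitor": User("monitor", "Locked", False),
        }

    def add(self, user):
        if user.access not in ACCESS_LEVELS:
            raise ValueError("unknown access level: " + user.access)
        if len(self.users) >= MAX_USERS:
            raise ValueError("user list full (32 users max)")
        self.users[user.name] = user
```

Because each vision system keeps its own User List, a user added this way on one camera would not exist on any other camera on the network.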


Image Settings

Section 10 | Slide 7

The Image Settings dialog configures the resolution and frame rate of Live and Online images and the resolution to use for image acquisition.

NOTE: The Live Acquisition and Online image settings only affect the image that is sent to the In-Sight Explorer user interface and do not affect the vision system image acquisition.

Restart

Section 10 | Slide 8

Restart opens the Restart Sensor confirmation dialog, which verifies that you intend to power cycle the vision system.


System Menu

Section 10 | Slide 9

The System Menu contains file operations that can be performed on multiple In-Sight cameras, as well as functions and options related to In-Sight Explorer. New or existing cameras can be configured to operate on a network or to communicate directly with a PC using the Add Sensor/Device to Network dialog.

Log On / Off

Section 10 | Slide 10

The Log On/Off dialog establishes a user name and password used to log on to individual In-Sight vision systems. The dialog can also be used to log off from all vision systems. By default, In-Sight Explorer will use admin as a user name and a blank password to log on to vision systems. If the Startup user name and password defined in the Options dialog do not match the user name and password defined in the User Access Settings dialog, the Log On/Off dialog will appear when In-Sight Explorer is launched.

Logging on to the In-Sight vision systems:

1. On the System Menu, click Log On/Off.
2. Enter the user name and password.
3. Select the Remember Password checkbox for the entered User Name and Password to be remembered at the next log in. By default, this option is unchecked, and the password needs to be entered each time the system is logged on to.
4. Click the Log On button to finish.

NOTE: If you are logged on to a vision system and then change the User Name or Password, you may be disconnected from any vision systems that cannot authenticate the new user information.

Once the log in is complete, the access level specified for the User Name in the destination system's User List will be in effect regardless of the user's access level on the In-Sight vision system from which the log on was initiated.


Create Report

Section 10 | Slide 11

Create Report allows you to generate HTML or XML-based output files documenting the network configuration and job details of one or more In-Sight sensors or emulators on your network.

NOTE: Extensible Markup Language (XML) is a web document authoring language similar to HTML. While HTML is primarily targeted at text and graphics applications, XML was developed to create a highly customizable standard method for storing organized groups of data that can be processed automatically.

Backup

Section 10 | Slide 12

The Backup dialog allows you to store an archival copy of the jobs, images, and sensor settings from an In-Sight sensor or emulator to the hard disk on your PC. Network settings are stored in a backup, but they are restored only when you are restoring to the same sensor that was backed up.

The destination folder, or Backup Directory, for archived data is defined in the Options dialog. When In-Sight Explorer archives data from an In-Sight camera or emulator, a folder is created in the Backup Directory with a name identical to the In-Sight camera's host name, along with a numerical suffix, starting with '.000'. If a backup already exists for that particular camera, then the numerical suffix will be incremented.

When you back up an In-Sight model that has an SD card, you can check the rightmost "Use SD Card" column to back up to the SD card.

NOTE: Backup operations should normally be done while In-Sight cameras are Offline because file transfer operations may affect the performance of time-critical jobs. For jobs that are not time-critical, you can enable Online backups by selecting the Allow Online Backups checkbox within the Options dialog.


Restore From

How to Replace a Broken Camera

Section 10 | Slide 13 Section 10 | Slide 14

The Restore From dialog allows you to retrieve the following:

- The most current archive of jobs, images, and configuration data for an In-Sight sensor.
- Older archived copies of jobs, images, and configuration data for an In-Sight sensor.
- An archive from a different In-Sight sensor.

NOTE: You must specify the archive directory of the camera to restore from; only one camera can be restored at a time.

Archived backups are restored from the Backup Directory, which is defined in the Options dialog. In-Sight Explorer will look for any subfolders that match sensor names from the In-Sight Network and display them in the Restore From dialog.

NOTE: Cameras must be Offline in order to perform the Restore From function; attempting to restore an Online camera will prompt a warning dialog to take the camera Offline.

How to Replace a Broken Camera:

***Always back up your camera any time changes are made!***

1. Take your replacement camera and physically connect it to a network with a PC.
   - Make sure the LEDs for Power and Networking are solid green.
2. If the camera does not show up in the list in In-Sight Explorer's network pane, Refresh the list in the Network Pane.
3. If the camera still does not show up, select Add Sensor in the System Menu.
   - The camera should show up in the Add Sensor window (if not, cycle the power on the camera while remaining in Add Sensor).
4. While still in Add Sensor, set the replacement camera's network settings to be compatible with your network.
5. In the System Menu, do a Restore From of the appropriate backup folder to the camera.


Clone

Section 10 | Slide 15

The Clone dialog allows you to mimic the job and image data, as well as most sensor settings, from a source In-Sight sensor to one or more destination In-Sight sensors.

NOTE: Host Name, Use DHCP server, IP Address, Subnet Mask, Default Gateway, DNS Server and Domain Name settings are not copied to the destination vision system(s).

Update Firmware

Section 10 | Slide 16

The Update Firmware dialog allows you to load the most current version of In-Sight software onto one or more In-Sight sensors.

NOTE: In order to update the firmware on an In-Sight vision system, the user must log on as a user with Full permissions.

WARNING!: Do not power cycle the device while the firmware is being updated.

NOTE: For In-Sight versions 3.3.0 through 4.9.3, a FUP Key had to be purchased and installed before upgrading the vision system's firmware. Beginning with In-Sight version 5.1.0, the FUP Key is no longer required. Older versions still require it, but the FUP Key can now be downloaded free of charge.


Options

Section 10 | Slide 17

The Options dialog allows you to customize In-Sight Explorer's startup and default GUI preferences, to configure the Offline In-Sight emulator, and to configure file utility preferences.

The Options dialog is comprised of the following optional settings:

- Access Management
- Emulation
- File Utilities
- Image Display
- Job View
- Record Defaults
- User Interface

VisionView Operator Interface

Section 10 | Slide 18

Multiple Platforms available:

- VisionView PC Software – Choose your own PC and run the VisionView interface directly on your existing machine – no additional hardware is required.
- VisionView 900 Panel – VisionView 900 is a powerful, low-cost operator interface panel that allows operators to adjust vision tool parameters and regions of interest without requiring a PC on your factory floor.
- VisionView VGA – Provides the flexibility to connect smaller- or larger-size monitors for 'control room' viewing of images, results, CustomViews and EasyViews. The VisionView VGA supports VGA displays of different resolutions, as well as touch screen displays.
- VisionView CE-SL (for third-party CE panels) – Use your existing CE panels to save valuable space. There is no longer a need to install new monitors.

All VisionView application software versions feature:

- Automatic detection – Quickly detect any Cognex vision system on your network.
- Mix and match Cognex In-Sight and DataMan systems – View up to nine (twelve with VisionView PC or CE-SL) systems in a tiled view.
- Graphical interface – Display full color images, with graphic overlays and operator controls.
- Fast image updates – See the most recent inspection images so you can view your process in real time.
- Access to CustomViews and EasyView – The operator controls created in the spreadsheet will appear on the VisionView screen.
- Run-time ability to train fonts, without a PC – No downtime during changeovers – ideal for OCR/OCV applications.


EasyView

Section 10 | Slide 19

If using the VisionView application, you can create an EasyView to customize how data is displayed and to determine if the data is editable by the operator. First, create the EasyView within In-Sight Explorer and save it to the camera's job. Then, preview the EasyView in the VisionView PC Demo application on your PC to ensure it is configured correctly.

NOTE: The VisionView PC Demo has an automatic timeout of forty-five minutes. To purchase a licensed version that provides complete functionality and does not time out, install the VisionView PC Software, which is available for download on the VisionView support site.
The look-and-feel of the VisionView PC Demo will be slightly different than that of the VisionView Operator Interface Panel or VisionView VGA.

USB 2.0 ports

Section 10 | Slide 20

USB Storage Devices to Save Filmstrip
- Connection to USB Storage Devices
- Save selected failed images from the Filmstrip to the USB storage device
- Facilitates Quality Control

Keyboard and Mouse
- Allows flexibility to use a USB Mouse and Keyboard instead of the touch-screen

Future Use
- Other Cognex Products


Job Status

Section 10 | Slide 21

Bezel Colors
- The outside bezel changes between green and red based on pass and fail.

Status Pane
- Uses the same icon as selected for the Filmstrip.

Determined by
- In-Sight: Watch cell in the ISE Results Queue.
- DVT: Results table overall pass/fail status.

Operator Controls

Section 10 | Slide 22

VisionView Operator Controls

- Online / Offline – Allows the operator to toggle the Online/Offline state of the active sensor(s).
- Adjust Image – Offers Live Video mode or display rotation.
- Trigger – Allows the operator to trigger an inspection on the active sensor(s).
- Switch View – Allows the operator to toggle the views available on the active sensor.
- Force Connection – The Force Connection button appears when VisionView fails to connect to In-Sight sensor(s) because the maximum number of connections is already established, or when VisionView's connection to the sensor is converted to a view-only connection because another application has already established the standard view connection. The operator can press the Force Connection button to allow VisionView to reestablish the standard view connection to the sensor(s).
  NOTE: The standard view connection allows the operator to perform various Run Mode actions. The view-only connection allows the operator to view the operation of the sensor. With the view-only connection, 'View Only' is displayed in the image area, all interactive controls are disabled, and the Trigger, Online/Offline, Send to Sensor, Focus, and TestRun buttons and the filmstrip are hidden.
- Options
- Custom View Settings
- Language
- Save / Load Jobs


Split Views

Section 10 | Slide 23

Split Views displays both the Current Image and the Last Failed Image. The screen is split in half with the current image on the left and the last failed image on the right (vertical). It can also be set up so that the current image is on the top of the screen and the failed image is on the bottom (horizontal).

To display the VisionView View Selection dialog, choose Edit > VisionView View Selection.

Options

Section 10 | Slide 24

The VisionView panel allows a job to be loaded without opening In-Sight Explorer.


VisionView Setup

Section 10 | Slide 25

- Select Language
  - Opens the Language screen to select the language to use in the VisionView interface.
- Choose Screen Layout for Operator
- Select User Controls to Display
- Control display and saving of images
- Access to Settings
  - Optional Password Protection
  - Manage Backups/Restores
  - Control image quality
- Select up to Nine Vision Sensors From the Network List
  - Auto-Detect or Manually Select

NOTE: The first time VisionView is powered up, sensors on the same subnet are automatically detected and displayed in the Selected Sensors list, without pressing the Auto Select Sensors button.

Summary

Section 10 | Slide 26

In this section we covered the following topics:

- The Sensor Menu contains information for the connected camera/emulator.
- The System Menu contains settings for In-Sight Explorer on that specific PC.
- Backup and Restore From provide easy ways to replace systems that have broken.
- VisionView allows for display and customized control in production environments.


Lab Exercise

Section 10 | Slide 27

Complete:
Lab Exercise 10.1 – Deployment and Finishing Applications
In-Sight Spreadsheets Standard Section 10 | Lab Exercise

Lab Exercise 10.1 – Deployment and Finishing Applications

At the end of this lab exercise, Participants will be able to:
• Utilize the utilities available in In-Sight Explorer to finish deploying the application
• Use the VisionView to display the application

The Participant will utilize the following In-Sight Functions to successfully complete this exercise:
• User Access Settings
• Startup
• Report
• Backup
• Restore From

Follow the steps below to complete the lab exercise:

User Access Settings

1. Click on the User Access Settings link – this is found in the Sensor menu.

The User Access Settings display.

2. Click the Add button to create a new user.

The User dialog displays.

3. Enter the new user's information in the appropriate fields and click the OK button twice.
NOTE: You can use any information that makes sense to you in these fields.
4. After the new user is created, make sure that you can log onto your camera with the new user information. Then log back onto your camera as before (admin).
5. Test the new user information on another camera.

Can you log in?

______________________________________________________

What would you need to do to be able to log in?

______________________________________________________
In-Sight Spreadsheets Standard Section 10 | Lab Exercise

Startup

1. Click on the Startup link – this is found in the Sensor menu directly above the User Access Settings.

The Startup dialog displays.

2. To have the camera automatically open your completed job and go online at startup, click on MyHistogram.job in the list and check the Start the Sensor in Online Mode checkbox. Then click the OK button.

Report

1. Click on the Create Report link – this is found in the System menu.

The Create Report dialog displays.

2. Select the In-Sight sensor to include in the Report and click the Create Report button.
3. When the Report is complete, open it and scroll through it to see what was saved.
In-Sight Spreadsheets Standard Section 10 | Lab Exercise

Backup

1. Click on the Backup link – this is found in the System menu.

The Backup dialog displays.

2. Select the In-Sight sensor to back up and click the Backup button.
3. When the Backup is complete, change the I/O setting to something different. Now perform a Restore From.

Restore From

1. Click on the Restore From link – this is found in the System menu.

The Restore From dialog displays.

2. Select the In-Sight camera to restore and then the most recent backup. Then click the Restore From button.
3. When the Restore is complete, check the I/O settings again. What do they show?

VisionView

Go to a VisionView station or open the demo software. Walk through the intuitive steps for a few moments to become acquainted with the system. Log on to your camera to display it through VisionView.
In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard section.

Section 10 – Deployment

• Employ utilities available to deploy the In-Sight vision system such as:
− User Lists
− Back-up/Restore
− Update Firmware
− Automatic Startup of the Camera
• Display their application at deployment using the VisionView 900

1. List three types of settings accessible from the Sensor Menu.

2. Name the two utilities that allow you to copy all the files in a camera to a PC, and then copy them back to the same or another camera.

3. List at least four User Interface preferences you can change in the System > Options menu.
Section 11 – Lighting and Optics

Objectives

At the end of this section Participants will be able to:

- Explain Lighting and Optics terms
- Discuss the different Lighting Techniques
- Describe how the use of filters and colored lights will affect the quality of the image

Section 11 | Page 2

The eleventh section of the In-Sight Spreadsheets Standard training will focus on Lighting and Optics. At the end of this section Participants will be able to:

- Explain Lighting and Optics terms
- Discuss the different Lighting Techniques
- Describe how the use of filters and colored lights will affect the quality of the image


Lighting and Optics are Important

Section 11 | Slide 3

• The success of machine vision applications depends on good lighting and optics
• Garbage in, garbage out

Why are Lighting and Optics important to capturing a good image in machine vision?

• The success of machine vision applications depends on proper lighting
• Cameras do not see objects; they see the light reflected from the object towards them.
• Light allows the camera to see – if the camera cannot see the part or the mark, it can't be read and it can't be inspected.
• Images with poor contrast and uneven illumination require more effort from the user, increasing processing time.

The saying "garbage in, garbage out" goes back to the early days of computers, meaning if incorrect data is input into a computer, the results are likely to be wrong. Similarly, if lighting is not good, inspection results may be wrong: good parts may be rejected, and bad parts may be accepted.

Optics

Section 11 | Slide 4

In this section we will look at Optics.


Terms – Field of View (FOV)

Section 11 | Slide 5

Field of View is the imaged area of an object under inspection. This is the portion of the object that fills the camera's sensor. Field of View is critical for choosing the correct optical components to use in an imaging application. Since resolution is dependent on field of view, determining field of view affects what one is trying to analyze or measure.

Terms – Working Distance (WD)

Section 11 | Slide 6

Working Distance (WD) is the distance from the front of the lens to the object. Working distance determines how far to position an imaging lens from the object under inspection. Working distance is critical when considering space constraints and lighting geometry.


Terms – Focal Length (mm)

Section 11 | Slide 7

Focal Length (mm) is the distance from the center of the lens to the image sensor. The focal length of a lens is the distance from the mid-point of the lens to the point at which light rays parallel to the center-line of the lens are focused, as in the diagram.

Section 11 | Slide 8

The Focal Length of a lens determines its angle of view, and also how much the subject will be magnified for a given position. Focal length also determines the perspective of an image. Longer focal lengths require shorter exposure times to minimize blurring caused by vibration.

The shorter the focal length, the more the lens bends the light rays and the wider its angle of view, so a lens with a focal length of 28mm can fit a lot more of an object into the field of view than can a lens with a focal length of 200mm.

The focal length of a lens is also considered an expression of its magnification power, and is usually stated in millimeters.
For example: A lens with a range of focal length from 5.8mm to 17.4mm is called a 3x zoom, because 17.4 = 3 × 5.8.
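The "3x zoom" arithmetic above is simply the ratio of the longest to the shortest focal length. A minimal sketch (the function name is invented for this example):

```python
def zoom_ratio(min_focal_mm, max_focal_mm):
    # A zoom lens's "Nx" rating is max focal length / min focal length.
    return max_focal_mm / min_focal_mm


# The 5.8mm-17.4mm lens from the example above is a 3x zoom: 17.4 / 5.8 = 3.
```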


Terms – Aperture

Large Opening / Small Opening

Section 11 | Slide 9

The Aperture range of a lens refers to the amount of light that the diaphragm can let inside the camera to reach the sensor. The aperture is the physical opening in the lens that allows light to the sensor, which determines the Depth of Field (DOF) outlined on the next slide.

NOTE: The F number can be displayed as 1:X instead of f/X.

Terms – Depth of Field (DOF)

Section 11 | Slide 10

Depth Of Field (DOF) is the maximum object depth that can be maintained entirely in focus. DOF is also the amount of object movement (in and out of focus) allowable while maintaining a desired amount of focus. It is the difference between the closest and farthest distances an object may be shifted before an unacceptable blur is observed. Depth of Field is also referred to as Depth of Focus.

NOTE: Depth of Field should not be confused with Working Distance.
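The aperture/DOF relationship can be made concrete with a common close-range textbook approximation (this formula is not from the course material): total DOF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the permissible circle of confusion, and m the magnification. A sketch under those assumptions:

```python
def total_dof_mm(f_number, coc_mm, magnification):
    # Common close-range estimate: DOF ≈ 2*N*c*(m+1)/m^2.
    # A larger f-number (smaller physical opening) gives proportionally
    # more depth of field, matching the Aperture slide above.
    return 2 * f_number * coc_mm * (magnification + 1) / magnification ** 2


# Stopping down from f/4 to f/8 doubles the depth of field.
```

The key takeaway matches the slides: closing the aperture trades light for depth of field.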


Parameters of Image Quality

Resolution / Contrast

Section 11 | Slide 11

Resolution is a measurement of an imaging system's ability to reproduce object detail, and can be influenced by factors such as the type of lighting used, the pixel size of the sensor, or the capabilities of the optics. The smaller the object detail, the higher the required resolution. It is often expressed in terms of line pairs per millimeter (lp/mm), or microns (μm).

Contrast describes how well black can be distinguished from white at a given resolution on an object. The lens, sensor, and illumination all play key roles in determining the resulting image contrast. Each one can detract from the overall contrast of a system at a given resolution if not applied correctly and in concert with one another.

Distortion / Perspective Error

Section 11 | Slide 12

The term Distortion is often applied interchangeably with reduced image quality. Distortion is actually an individual aberration that does not technically reduce the information in the image; while most aberrations actually mix information together to create image blur, distortion simply misplaces information geometrically. This means that distortion can actually be calculated or mapped out of an image.

Note: In extremely high distortion environments, some information and detail can be lost due to resolution change with magnification or because of too much information being crowded onto a single pixel.

Conventional lenses have angular fields of view such that as the distance between the lens and object increases, the magnification decreases. This is how human vision behaves, and contributes to our depth perception. This angular field of view results in parallax, also known as Perspective Error, which decreases accuracy, as the observed measurement of the vision system will change if the object is moved (even when remaining within the depth of field) due to the magnification change. The angular field of view of the fixed focal length lens in the image translates to parallax error and causes the two cubes to appear to be different sizes.
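Since resolution is expressed in line pairs per millimeter, a useful back-of-the-envelope check is the sensor's sampling limit: each line pair needs at least two pixels. This is a standard Nyquist estimate, not a figure from the course, and the function name is invented for the example:

```python
def sensor_limited_lp_per_mm(pixels_across_fov, fov_mm):
    # Nyquist sampling: a line pair needs at least two pixels, so the
    # sensor supports at most pixels / (2 * FOV) line pairs per mm
    # at the object (optics and lighting may reduce this further).
    return pixels_across_fov / (2.0 * fov_mm)


# 1600 pixels imaging a 100 mm field of view -> at most 8 lp/mm at the object.
```

In practice the lens and lighting usually limit contrast before this sensor bound is reached, which is why the slide treats resolution and contrast together.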


Types of Lenses

Telephoto Lens / Wide Angle Lens / Zoom Lens / Macro Lens / Telecentric Lens

Section 11 | Slide 13

A Telephoto lens is a lens constructed to produce a relatively large image with a focal length shorter than that required by an ordinary lens to produce an image of the same size. It is used for small or distant objects.

A Wide Angle lens refers to a lens whose focal length is substantially smaller than the focal length of a normal lens. This type of lens allows for a wider depth of field and closer focusing distance.

A Zoom lens allows a camera to change smoothly from a long shot to a close-up or vice versa by varying the focal length.

A Macro lens is optimized for high magnification applications. The size of the image = size of the object (1:1).

A Telecentric lens optically corrects for perspective distortion. At any distance from the lens, a Telecentric Lens will always have the same field of view.

Lens Advisor

http://www.cognex.com/Resources.aspx

Section 11 | Slide 14

The Cognex Lens Advisor makes it easy to select the right lens for each Vision or ID application. Depending on the information you have available about your application, select the tab that gives the information you would like the Lens Advisor to calculate.

There are three lens parameters that are related: Focal Length, Field of View, and Working Distance. If you specify two of these, the Lens Advisor will calculate the third for you.
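The "specify two, calculate the third" relationship among Focal Length, Field of View, and Working Distance can be sketched with the simple pinhole/thin-lens proportion FOV / WD ≈ sensor size / focal length. This is an illustrative approximation only; the actual Lens Advisor may refine it, and the sensor width below is an assumed example value:

```python
def focal_length_mm(working_distance_mm, fov_mm, sensor_mm):
    # Pinhole approximation: FOV / WD ≈ sensor size / focal length,
    # so f ≈ WD * sensor / FOV. Rearrange to solve for whichever of
    # the three parameters is unknown.
    return working_distance_mm * sensor_mm / fov_mm


# Example (assumed values): a sensor ~7.1 mm wide, 200 mm working distance,
# 100 mm field of view -> f ≈ 14.2 mm, so a 16 mm stock lens is the
# nearest common choice (and slightly shrinks the FOV).
```

This also explains the rule of thumb on the next slides: a smaller focal length gives a larger FOV at the same working distance.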


Lighting Advisor

http://www.cognex.com/Resources.aspx

Section 11 | Slide 15

The Cognex Lighting Advisor lets you simulate different kinds of lighting on a variety of types of parts. At the top, choose a part that has characteristics similar to the part you are inspecting. On the right, choose a height of the light above the part. Then try out different lights from the list at the left.

What do I do …?

12mm 16mm 25mm

Section 11 | Slide 16

If you're focused on a part and need a larger FOV:
- Go to a smaller focal length lens (25mm -> 16mm)

If you're focused on a part and need to mount the camera closer (WD):
- Go to a smaller focal length lens (25mm -> 16mm)

The smaller the focal length number, the closer the WD and the larger the FOV.


What do I do …?

16mm / no spacer
16mm / 2mm spacer (infinity)
16mm / 2mm spacer (closer)

Section 11 | Slide 17

You can try to get the same FOV with a 16mm lens and no spacers, but you will not get it in focus. Another solution would be to go to a larger focal length lens.

Spacers
- Allow larger than normal magnification
- The greater the number of spacers, the greater the magnification effect
- You must change working distance and focus to get a sharp image
- Spacers can reduce the quality of the image, so care must be taken when deciding between using a spacer or another lens

Lighting

Section 11 | Slide 18

In this section we will explore Lighting.


Lighting Considerations

Section 11 | Slide 19

What should you consider when selecting your lighting?

- Is the surface finish specular (smooth and glossy) or diffuse (rough and dull)?
- Does the surface exhibit directional reflectance (reflects light in a specific direction)?
- Does the appearance of the part change under different colors of light (darken or lighten)?
- Is the part's surface flat or 3-dimensional? Curved? Irregular? Etched? Embossed? Raised?
- Is the surface of the part stable or will it change over time (tarnish, oxidize, fade)?
- How might ambient (all around) light affect the part?

Light

[Figure: the electromagnetic spectrum, from NF and radio waves through microwaves and optical radiation (thermal radiation, mid-IR, near-IR, visible light at roughly 400–700 nm, UV-A/B/C, X-UV) to X-rays and gamma rays, with the V(λ) and V'(λ) eye-sensitivity curves.]

Section 11 | Slide 20

As defined by Encyclopedia Britannica, light is electromagnetic radiation that can be detected by the human eye. Electromagnetic radiation occurs over an extremely wide range of wavelengths, from gamma rays, with wavelengths less than about 1 × 10^−11 meters, to radio waves measured in meters.

Within that broad spectrum the wavelengths visible to humans occupy a very narrow band, from about 700 nanometers (nm; billionths of a meter) for red light down to about 400 nm for violet light. The spectral regions adjacent to the visible band are often referred to as light also, infrared at one end and ultraviolet at the other.


Light and Camera Lighting Options

Section 11 | Page 21 Section 11 | Page 22

Images themselves are created when light reflects off of an item. It strikes the item and if it goes up into
the camera, then it is looked at as bright.

If the light reflects away from the camera, then it’s considered dark. This allows for your edges and your
textures to become prominent, and that is how machine vision works. It looks at the differences between
light and dark areas.

Images are created when light reflects into the camera
- It is bright when reflection is direct (or from stray light entering the lens)
- It is dark when light rays miss the camera

This allows for edges or texture to become prominent.

Also, it’s very important to take into account where your actual light source is in relation to your part and
to the camera.

We’re showing a dark field effect on the left side where it has the light source very low to the part, so it’s
striking off the surface and away from the camera unless you have any divots or low spots. We’re going to
talk about that in a moment.

On the other side is bright field. In fact, this is a direct light source. You have a spotlight reflecting down
onto the part and then reflecting up into the camera itself.

When considering the placement of the part with respect to light and camera, keep in mind that angle and
direction can greatly change the appearance of the part.
Section 11 | Slide 21 Section 11 | Slide 22


Lighting Options / Bright Field

Same Part - Different Light Position

Section 11 | Page 23 Section 11 | Page 24

You’ll notice that the images that you get from each option are quite different. There’s a dark field one on
the left, so you see your hot cocoa clearly. But if you’re using a spotlight, as on the right, you’ll notice that
the top isn’t exactly flat but has some shadowing and some reflection.

Let’s talk about Bright Field first.

How does bright field work?

The light strikes down on the flat surface of the part and then back up into your camera. If you have any
indentations or any raises upon your part, what will happen is the light will strike down into it and then
away from your camera. This creates a dark edge.

If we look at this coin, we’ll notice that the flat surface of the coin gives you a bright field behind it and any
of the indentations or raisings on the coin gives you the dark effect.

The rays are perpendicular to the surface.

The shape and contour are enhanced:


- Diffuse surfaces are dark
- Flat, polished surfaces are bright
- Used to emphasize height changes

Section 11 | Slide 23 Section 11 | Slide 24


Dark Field / Going from Bright Field to Dark Field

[Slide images: the same part as the light’s distance from the object varies from far to near]


Section 11 | Page 25 Section 11 | Slide 26

Dark Field has the opposite effect. Here the light strikes away from the part.

If it’s a flat surface, the light will reflect away from the camera, so you’re going to get dark edges. As light
hits anything that’s an indentation or a raise, it will reflect back up into the camera. Thus, you’re going
to get a dark field for the background, and any indentations or raises are going to be light.

The rays are at an angle to the surface.

The shape and contour are enhanced:

- Diffuse surfaces are bright
- Flat, polished surfaces are dark
- Used to emphasize height changes

Notice how the two techniques are simply a matter of how the light is reflected off of the object -
accomplished through positioning the light.

All that we did here is we took images of the part and we just changed where the light was with respect
to the part and the camera. You’ll notice that as we go through the images it goes from a bright field effect
down through until you finally have a dark field effect. The part underneath has not changed at all.

Section 11 | Slide 25 Section 11 | Slide 26


Diffuse and Collimated Light Use: Constant or Strobed

[Slide diagram: collimated light rays pass through frosted glass or plastic and emerge as scattered light rays; a condenser lens does the reverse, turning scattered light rays into collimated light rays]

Strobed:
• Only on at acquisition
• Stops movement
• Requires light that responds quickly
• Needs to be controlled

Not Strobed:
• Always on
• Consistent light
• Easy to set up
• Heats up
Section 11 | Page 27 Section 11 | Page 28

The next technique is Diffusion. This is where you take collimated light rays and put them through some
type of frosted glass or plastic that scatters them.

This is a part that is moving under the camera.

If a strobe is used, you will notice that there is no blurring of the image. It is as if the image is stopped. If
the part is not strobed, then there may be blurring depending on how fast the part is moving. Motion can
also be stopped by decreasing the amount of time that the electronic shutter is open. You will need high
intensities of light to compensate for the reduced time.
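The blur tradeoff above is easy to estimate: the smear in pixels is part speed × exposure time divided by the size of one pixel on the part. The field of view, pixel count, part speed, and exposure values below are illustrative assumptions, not course numbers.

```python
# Rough motion-blur estimate: blur (pixels) = speed * exposure / (mm per pixel).
# FOV, pixel count, speed, and exposure times are illustrative assumptions.

def blur_pixels(speed_mm_s: float, exposure_s: float,
                fov_mm: float = 100.0, pixels: int = 800) -> float:
    mm_per_pixel = fov_mm / pixels
    return speed_mm_s * exposure_s / mm_per_pixel

# A part moving 500 mm/s under a 10 ms exposure smears across many pixels...
print(blur_pixels(500, 0.010))
# ...but a 100 microsecond strobe (or shutter) freezes it to under a pixel.
print(blur_pixels(500, 0.0001))
```

This is why a strobe (or a very short shutter time plus a bright light) effectively "stops" the part in the image.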

Section 11 | Slide 27 Section 11 | Slide 28


Lighting Techniques

[Slide diagram: the five basic lighting techniques — Direct (or Front), Back Light, Diffuse On-Axis, Diffuse Off-Axis, and Structured]
Section 11 | Page 29 Section 11 | Page 30

In this section we will look at various Lighting Techniques.

A lighting technique is a combination of a light source and how it is placed with respect to the part and the
camera. A particular technique can enhance your image by negating some features in your
image or by accentuating others, such as silhouetting a part to allow measurement of edges while negating the
surface details.

There are five basic techniques – Direct, Back, Diffuse On-Axis, Diffuse Off-Axis, and
Structured. Over the next few slides we will take a look at each technique in more detail.

Section 11 | Slide 29 Section 11 | Slide 30


Back Light Back Light

Hints:
• Keep light clean
• Use collimated film for high accuracy

Section 11 | Page 31 Section 11 | Page 32

We’ll start with the Back Light. It creates a silhouette to accentuate the shape of the part to allow for
measurement but negates all surface detail. It creates the optimal contrast of basically black on white.

Advantages:
- Maximum contrast
- Simplifies image by creating a silhouette of the part

Disadvantages:
- Surface detail lost
- Difficult to use with objects in a fixture

Applications:
- Gauging and measuring dimensions (especially holes)

This technique may help simplify measurements on the part as you easily get the outside edges, but keep
in mind that you lose all surface detail. In some environments, back lights are not an option because the
mechanical fixture that holds the part would block the light.

Back lights are used often in gauging and measurement applications. It is important that you keep the
surface of the light clean and use collimation film to enhance the edges and increase the accuracy of
measurements.

Section 11 | Slide 31 Section 11 | Slide 32


Back Light Applications Back Light Applications

Ring Light Back Light Ring Light Back Light

Section 11 | Page 33 Section 11 | Page 34

In this application we are looking to take measurements on the filament of the bulb. In the image on the
left, you can see a light bulb taken with a normal ring light (also known as a direct light source). Note the
glare off of the glass - this actually takes away from the ability to clearly see the filaments of the bulb.

As seen in the image on the right, by using a back light we can see the inside circuitry quite clearly.

In this application we need to ensure that the correct part is in the bag. The left image is taken with a ring
light. Notice all the glare and how you are not able to see the part clearly.

Using a back light, you negate all of the glare and don’t even see the bag, and now the part can be
inspected easily and consistently.

Section 11 | Slide 33 Section 11 | Slide 34


Direct Front Light Direct Front Light

Hints:
• Use 2 or more spotlights to minimize shadows
• Shadows can be used to improve contrast

Section 11 | Page 35 Section 11 | Page 36

The second type of lighting technique is Direct Front Lighting. This is where a light is illuminating a part
from a slight angle as seen in the image. Spotlights are often used as direct front lights.

Often this technique is used to maximize contrast on low contrast images – making use of shadows for
edges – as well as freezing motion through strobes.

Note that more than one front light can be used to minimize shadows. This could be multiple spotlights
from different angles, or a ring light that has the light coming down as a complete ring as opposed to a
single point.

This technique is easy to set up and can create excellent contrast; however it can give shadows as well.
This may or may not be beneficial to your application. It may also produce glare on shiny parts.

Advantages:
- Easy to set up
- Maximum contrast

Disadvantages:
- 3D parts will cast shadows
- Causes specular reflection on shiny parts

Applications:
- Maximum contrast for low contrast images
- Used as a strobe to freeze movement

Section 11 | Slide 35 Section 11 | Slide 36


Direct Front Light Application Structured Light

Front Light Accentuate Shadows

Section 11 | Page 37 Section 11 | Page 38

In this application we are inspecting the crevices of the sponge. Note that the image on the left is taken
with a simple front light. There is glare and a hotspot right in the middle.

Simply by moving the direction of the front light and making use of the resulting shadows, we can easily
analyze the crevices on the surface of this sponge and complete our inspection more accurately.

Another technique is called Structured Lighting. Structured lighting makes use of a known light pattern
(normally a plane of light creating a line) that is used to determine dimensional information. These are
usually highly collimated light sources such as lasers or fiber optic line lights.

Section 11 | Slide 37 Section 11 | Slide 38


Structured Light Structured Light Applications

Line Light Change in Depth


Hint:
• Use fiber optic as opposed to laser when possible

Section 11 | Page 39 Section 11 | Page 40

This technique is an inexpensive way to measure depth and height as well as show surface detail on low
contrast parts. Take care with lasers. They are expensive and fragile. And as we mentioned earlier, they
can be dangerous. Also make sure the part being inspected does not absorb light, as you need the
reflection of the light in order to see the effect.

Applications that use structured lighting include gauging continuous features like steps, or
very low contrast parts where you are looking for line deformation.

When possible use a fiber optic line light in place of lasers to avoid issues such as safety and delicate
handling care.

Advantages:
- Inexpensive for measuring height/depth
- Shows surface profile on low contrast

Disadvantages:
- Lasers are expensive and must be handled with care
- Z direction is not highly accurate

Applications:
- Gauging continuous features
- Very low contrast part (line deformation)

In this application, we are looking for a change in depth on the part being inspected. On the left, you see
there is no change; the line is flat and therefore we can assume the surface is flat.

On the right, the line is broken and we can see where there is a step in the middle. It’s that change in the
line that is used to determine the z dimension. The more the line is offset, the greater the difference in height.
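The line-offset-to-height relationship can be sketched with simple triangulation: if the line is projected at an angle to the camera axis, a step of height h shifts the line sideways by h × tan(angle). The 45-degree projection angle here is an illustrative assumption, not a course value.

```python
# Triangulation sketch for structured light: a step of height h shifts the
# projected line by offset = h * tan(angle), so h = offset / tan(angle).
# The 45-degree projection angle is an illustrative assumption.
import math

def height_from_offset(offset_mm: float, angle_deg: float = 45.0) -> float:
    """Estimate step height from the sideways shift of the projected line."""
    return offset_mm / math.tan(math.radians(angle_deg))

# At 45 degrees, a 2 mm sideways shift of the line implies roughly a 2 mm step.
print(height_from_offset(2.0))
```

A steeper projection angle gives more line shift per unit of height (better z sensitivity) at the cost of more occlusion, which is one reason the z direction is not highly accurate.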

Section 11 | Slide 39 Section 11 | Slide 40


Diffuse On-Axis Light (DOAL) Diffuse On-Axis Light (DOAL)

Hint:
• Use with another source for fill light

Section 11 | Page 41 Section 11 | Page 42

Diffuse On-Axis Lighting, also known as a DOAL, allows for light to be reflected directly at the part
without the light source getting in the way of the camera. This is done by using a 50% silvered
mirror: the light shines directly onto the mirror and is then reflected down onto the part, while the
camera can see through the mirror and capture the image of the part being illuminated.

When using DOAL lights, because of the loss of light through the silvered mirror, you may need to
consider using another light source to get fill light in order to give consistent light over the complete part.

The great thing about DOAL lights is that they allow your camera to be normal to (or directly over) the
part being inspected. Note that in some cases, the thickness of the mirror could cause a double image.

Diffuse on-axis applications include detecting flaws on shiny, flat surfaces or illuminating the insides
of small cavities.

Advantages:
- Camera is normal to the part
- Creates a bright field effect

Disadvantages:
- Thickness of mirror can produce a double image

Applications:
- Detecting flaws on flat, shiny surfaces
- Illuminating small cavities

Section 11 | Slide 41 Section 11 | Slide 42


Diffuse On-Axis Light (DOAL) Applications Diffuse Off-Axis Light

Ambient Light Diffuse On-Axis (DOAL)

Section 11 | Page 43 Section 11 | Page 44

Here is a dot-pinned connector that has a rough surface. The image on the left shows the connector
under ambient light.

The image on the right shows the connector under a DOAL light. Notice how the dots are accentuated
and the rough background is negated – there are also no hotspots or glare.

The last lighting technique is Diffuse Off-Axis Lighting, known as a cloudy day illuminator (CDI) or a
Dome Light. With diffuse off-axis lighting, the light is not reflected directly onto the part but first onto a
diffuse surface, then onto the part.

In the image above, the lights at the bottom of the dome shine up into the dome area and then the
reflected light comes down on the part.

To remember the difference between Diffuse On-axis lights and Diffuse Off-axis lights, remember
that Diffuse On-axis (DOAL) lights shine light directly onto the part (On-axis – on the part), whereas
Diffuse Off-axis lights shine the light off something else first (Off-axis – off something else).

Section 11 | Slide 43 Section 11 | Slide 44


Diffuse Off-Axis Light / Diffuse Off-Axis Light Applications

Hints:
• Use with DOAL to fill the dead spot
• Cloudy Day Illumination devices are available in a variety of sizes

[Slide images: Direct Front (Ring) vs Diffuse Off-Axis (Cloudy Day Illumination)]

Section 11 | Page 45 Section 11 | Page 46

The Diffuse Off-Axis technique negates shadows as if you are looking at something on a cloudy day. It
also negates the hot spots and glare that can cause problems with applications. Note that although using
this technique eliminates issues caused by glare and shadows, you may introduce a dead spot (a darker
area) in the image due to the hole in the dome that is needed for the camera to view the part. You may
also find that there are space restrictions that limit the use of dome lighting.

Applications include locating defects on shiny, non-flat surfaces.

You may want to use this technique in conjunction with a DOAL to get rid of the dead spot (as seen in the
image above). There are many different sizes of CDIs available.

Advantages:
- Complete diffuse illumination eliminates shadows
- Avoids hot spots and glare

Disadvantages:
- Dead spot due to hole for camera
- Reduced intensity

Applications:
- Detecting flaws on rounded, shiny surfaces

Here is a foil lid application. With a ring light, there is a tremendous amount of glare. But with the cloudy
day illuminator, the lid can be totally inspected for possible holes or tears.
Section 11 | Slide 45 Section 11 | Slide 46


Diffuse Off-Axis Light Applications / Review of Techniques

[Slide images: the part under Ring Light, Dome, and Flat Dome (by CCS) – note: the “dead spot” is gone – and a review of the part under Front Light, Back Light, Ambient Light, Diffuse On-Axis (DOAL), and Diffuse Off-Axis]
Section 11 | Page 47 Section 11 | Page 48

If you have space issues with using a dome – in other words, you do not have enough room between the
camera and the part – CCS has created flat dome lights that accomplish a similar result as their larger
counterpart. This thin, flat light is placed directly between the part and your camera. The camera can
focus through the dome lighting material so that you can see the part.

Notice that the dead spot is eliminated when using the flat dome.

Now let’s summarize the effects created by the different lighting techniques. On the left is the part under
ambient light.

Let’s first look at it under Front Light. Note the sharp shadow above the top of the part.

Section 11 | Slide 47 Section 11 | Slide 48


Ambient Lighting

Filters and Colored Light

• Ambient lighting is external lighting that strikes your parts


Examples: ceiling lights, skylight

• It is a problem because it can vary and affect your inspection results

Section 11 | Page 49 Section 11 | Page 50

Ambient light tends toward the visible spectrum, meaning it includes red, orange, yellow, green, blue,
indigo, and violet. If you use a red filter, then you are filtering out all but the red component of the ambient
light. Then, if you add your own bright red light, all of your light will get through the red filter and
“overwhelm” the smaller red component from the ambient light.

Ways to deal with ambient lighting:

1. Get rid of it (ex: cover skylights)
2. Cover (shroud) the parts being inspected
3. If you can’t do 1 or 2, use a color filter and flood the part with your own lighting of the same
   wavelength (ex: red)

In this section we will focus on Filters and Colored Lights.

Section 11 | Slide 49 Section 11 | Slide 50


Polarizing Filters Polarizing Filters

Section 11 | Page 51 Section 11 | Page 52

A polarizing filter is analogous to slatted blinds, in that it “slices” 3D light waves into parallel planes of light.
Two of these filters are used. The first, the “polarizer,” does the first slice. If the background is shiny and
flat, the parallel planes are reflected as parallel planes. They then pass through the second polarizer, the
“analyzer.” The analyzer is rotated in such a way that its slats are perpendicular to the incoming planes.
This blocks most of the parallel light.

But if the feature is not flat, it will scatter some of the incoming planes, so that some of the feature’s
reflected light will get through the analyzer to the camera. The net effect is to reduce the background
glare.
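The polarizer/analyzer behavior follows Malus’s law: transmitted intensity I = I₀ cos²(θ), where θ is the angle between the two filters’ transmission axes. A minimal sketch, assuming idealized lossless filters:

```python
# Malus's law for a polarizer/analyzer pair: I = I0 * cos^2(theta), where
# theta is the angle between the two transmission axes (idealized filters).
import math

def transmitted(i0: float, theta_deg: float) -> float:
    """Intensity of polarized light passing the analyzer at angle theta."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(transmitted(1.0, 0))   # parallel axes: all the polarized light passes
print(transmitted(1.0, 90))  # crossed axes: the specular glare is blocked
```

Crossing the analyzer at 90 degrees is what suppresses the flat-background glare, while light scattered by non-flat features is depolarized and still reaches the camera.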

Section 11 | Slide 51 Section 11 | Slide 52


Polarized Light Application Color Filters

Ring Light without Polarizer Ring Light with Polarizer

Section 11 | Page 53 Section 11 | Page 54

Polarizers are used in imaging applications to reduce glare or hot spots, enhance contrast, or to perform
stress evaluations. Polarizers can also be used to measure changes in magnetic fields, temperature,
molecular structures, chemical interactions, or acoustic vibrations.

In this application we want to read the information on the lid of the Grape Jam. As seen in the image on
the left, when a ring light without a polarizer is used there is glare on the part of the lid that is raised,
while the information that is in the indentation is visible.

By adding the polarizer to the ring light, as on the right, the glare is reduced and the entire lid is
recognizable.

There are many different types of filters in machine vision that can be utilized to improve or change the
image of an object under inspection.

It is important to understand the different technologies behind the various types of filters in order to
understand their advantages and limitations. Although there is a wide variety of filters, almost all can be
divided into two primary categories: colored glass filters and coated filters.

Section 11 | Slide 53 Section 11 | Slide 54


Color / Color Application

[Slide images: a color wheel – red, orange, yellow, green, blue, purple – and colored strips shown in actual colors and under White, Red, Green, and Blue light]

Section 11 | Page 55 Section 11 | Page 56

The next thing to talk about is the spectrum and color, because we live in a color world. We can use colored
light with grayscale cameras to enhance or negate some of the features that we’re interested in. What
we need to keep in mind is that opposite colors on the color wheel darken the colors across from them.

- Like colors make it look more white in the background.
- Use colored light to create contrast.
- Use like colors or families to lighten:
  - Yellow light makes yellow features brighter
- Use opposite colors or families to darken:
  - Green light makes red features darker

In this image there is a red, green, and blue strip.

In a grayscale world, you’ll notice that it just becomes varying shades of gray. Now yes, we do know
which order it goes in, so if you were asked, “What’s the green one?” you could easily say it’s the middle.
But what if you didn’t know the order?

Well, we could use a different light. If we used a red light, we would see that the red would be
the lighter color. If we used the green light, we would notice that the green would be the lighter color, and
if we used the blue light, then the blue would be the lighter color.
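The “like colors lighten, opposite colors darken” rule can be sketched with a toy model: a grayscale camera’s pixel brightness is roughly the overlap between the light’s RGB content and the surface’s RGB reflectance. The reflectance values below are invented for illustration.

```python
# Toy model of colored light + grayscale camera: brightness is the overlap
# between the light's RGB content and the surface's RGB reflectance.
# Reflectance values are illustrative assumptions.

def gray_level(light_rgb, surface_rgb):
    """Brightness seen by a mono camera: sum of per-channel products."""
    return sum(l * s for l, s in zip(light_rgb, surface_rgb))

red_light   = (1.0, 0.0, 0.0)
green_light = (0.0, 1.0, 0.0)
red_ink     = (0.9, 0.1, 0.1)  # a surface that reflects mostly red

print(gray_level(red_light, red_ink))    # bright: like colors lighten
print(gray_level(green_light, red_ink))  # dark: opposite colors darken
```

The same model predicts the strip example: whichever strip matches the light color renders lightest in the grayscale image.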

Section 11 | Slide 55 Section 11 | Slide 56


Colored Light Application Infrared Light (700 nm - 1 mm)

Ambient Light White Light

Blue Light
Section 11 | Slide 57 Section 11 | Page 58

Here’s an application where we want to enhance the date code on a jar lid but we want to negate the
other printed lettering. Since we’re interested in the red print and we want to get rid of the blue print,
we could use a blue light. Notice how the blue kind of blends away into the background while that red print
really stands out crisply. Now you can use an OCV or OCR vision tool very easily.

The date information is accentuated with blue light:
- Blue print is negated
- Red print is now darkened

There’s also infrared light. Infrared is in the range above 700 nanometers. Operators won’t even know
that it’s there.

Infrared (IR) light is invisible to the human eye.
- Operators don’t know it is there
- Negates all color – everything is gray
- Does not permeate materials

Some of the different uses – barcode and shrinkwrap, credit cards, cows, and color crayons.

Section 11 | Slide 57 Section 11 | Slide 58


Infrared Light Application Ultraviolet Light (10 - 400 nm)

Ambient Light IR Light and IR Filter*


(IMIF-BP850)

Section 11 | Page 59 Section 11 | Page 60

Now, one of the great things about infrared light is that it’s great with produce.

Here’s an image where you see an avocado in normal ambient light that looks perfectly good, but if you
put an IR light on it, notice how there’s a bruise there. Anytime you’re using infrared or ultraviolet light,
you really should use a filter on your camera. This allows you to make sure that you’re only looking at
what is being fluoresced from the light and not the light itself, because that can
sometimes throw off your application.

NOTE: Filters should be used in most IR and UV applications.

The other type of light is ultraviolet light, which is below the 400 nanometer range. Ultraviolet light can
fluoresce ink or glue. It’s not the actual light itself that you need, but the fact that you’re fluorescing the
glue or the ink up into the visible range.

Many materials fluoresce under ultraviolet (UV) light:
- Ink, Labels, Glue

By putting direct UV (or near-UV) light on the part, visible light is emitted.

NOTE: Filters should be used to block out UV light and only allow fluorescent wavelengths into the camera.

Section 11 | Slide 59 Section 11 | Slide 60


Ultraviolet Light Application / Summary

[Slide images: the part with No Filter vs with a Longpass UV Filter]

• Good lighting is important, with the goal of making the features stand out from the background.
• There are many choices in lighting and optics. The positioning of a light is also important.
• Ambient light should be blocked or minimized.
• The use of filters and colored lights can enhance the image, even in a greyscale camera.

Section 11 | Page 61 Section 11 | Page 62

Here’s an example of ultraviolet light.

If we take a look at this part underneath the ultraviolet light with no filter, you’ll notice that it’s making the
text of the number fluoresce, but you’re also seeing the glare of the light behind it as well. But if we
add a UV filter, the UV light is blocked and only the fluorescing ink passes up into the visible range to the
camera.

The UV ink lettering is illuminated using a Black Light.

In this section we covered the following topics:

- Good lighting is important, with the goal of making the features stand out from the background
- There are many choices in lighting and optics. The positioning of a light is also important.
- Ambient light should be blocked or minimized.
- The use of filters and colored lights can enhance the image, even in a greyscale camera.

Section 11 | Slide 61 Section 11 | Slide 62


Question 1

What lighting technique is best suited to illuminate


round metal shiny parts?

Quick Quiz

A. Dark Field
B. Back Light
C. Bright Field
D. Diffuse Off-Axis (Cloudy Day Illumination/Dome)

Section 11 | Page 63 Section 11 | Page 64

Quick Quiz! What lighting technique is best suited to illuminate round metal shiny parts?

A. Dark Field
B. Back Light
C. Bright Field
D. Diffuse Off-Axis (Cloudy Day Illumination / Dome)

Answer: ____________________

Section 11 | Slide 63 Section 11 | Slide 64


Question 2

If you had this part to inspect and were only concerned
about the blue vertical print, what color light would you
use?

A. Red
B. Green
C. Blue
D. Infrared

Question 3

One lighting technique will inspect all parts in any
application.

– True
– False

Section 11 | Page 65 Section 11 | Page 66

If you had this part to inspect and were only concerned about the blue vertical print, what color light would
you use?

A. Red
B. Green
C. Blue
D. Infrared

Answer: ____________________

One lighting technique will inspect all parts in any application.

- True
- False

Answer: ____________________

Section 11 | Slide 65 Section 11 | Slide 66


In-Sight Spreadsheets Standard Skills Journal

Please answer the corresponding questions upon completion of each section. The
questions are to reinforce the skills learned during each In-Sight Spreadsheets Standard
section.

Section 11 – Lighting & Optics


• Explain Lighting and Optics terms
• Discuss the different Lighting Techniques
• Describe how the use of filters and colored lights will affect the quality of the
image

1. If you increase the aperture of a lens, you let more light in. What would be a
disadvantage of increasing aperture?

2. List three kinds of lens distortion.

3. What type of lens has no perspective distortion?

4. What type of filter is good for removing background glare?

5. What are the two categories of wavelengths that the camera “sees” that humans do
not?

Page 1
In-Sight Spreadsheet Standard Final | Project

In-Sight Spreadsheets Standard – Final Lab

Lab Objective:

Your task is to create an In-Sight Spreadsheet inspection with a real part using what you
learned during your In-Sight Spreadsheets Standard class.

All inspection tasks must be completed, and the good/bad part
must always pass/fail your inspection. Assume that the only defects on parts will appear
as seen on the bad side of the plate, i.e., no other variations.

There are no requirements on how these tasks are to be completed. Be prepared to discuss
your solution at the end of class.

Please record the tool used and where the tool is found within the Spreadsheet on the lines
provided.

Inspection Tasks:

1. Consistently find the part in the image. Assume that the scale is constant and that
   the part can rotate, even to being upside down.

2. Measure the width of the part as shown in the picture above. The width of the block
   should be about 40-50 mm, depending on your setup. Your inspection should report
   the width in millimeters.

3. Check that the connectors have all pins present and that they are correctly installed.

4. Check that the part has the correct number of LEDs installed.

5. The camera must be triggered via a button function in the spreadsheet.

6. Create a Custom View that shows the status of all of the inspection tasks and the status
   of the overall inspection.

NOTE: The numbers on the image refer to the corresponding lab steps.

Page 1 Page 2
[Part drawings: grid spacing = 10.000 millimeters]


In-Sight Spreadsheets Standard Skills Journal Answer Key In-Sight Spreadsheets Standard Skills Journal Answer Key

Please answer the corresponding questions upon completion of each section. The questions are to Section 2 – Software, Image and Calibration
reinforce the skills learned during each In-Sight Spreadsheets Standard section. • Manage multiple networked In-Sight systems from a single PC
• Explain the basic principles and terminology of image acquisition
Section 1 – Hardware and Connections
• Record and play back images
• Demonstrate how to connect the In-Sight camera to the network
• Navigate through the spreadsheet
• Explain who Cognex is and its place in the market
• Save job files
• Identify the In-Sight Product Offerings
• Load job files
• Discuss how Image Chip pixels and Field of View affect resolution
1. Name three window panes in the Spreadsheet Mode of In-Sight Explorer.
1. The camera is not seen on the list in the Network Pane. List possible cause and solutions. a. Network Pane
a. The camera and Ethernet power LED is NOT on - supply power.
b. The IP address of the camera or PC is NOT compatible with the network addressing - in the User Interface, open the Add Sensor dialog box (System Menu). Verify or change the IP address. If the camera is not listed, power cycle the camera while the Add Sensor box is open.
c. Check your firewall settings. Disable the firewall if it is not needed, or add ISE as an exception.
d. The network is down - ping the IP address of the camera from the command prompt.

2. Name two other lines of machine vision products besides In-Sight that Cognex offers.
a. DataMan
b. Displacement Sensors (3D)
c. VisionPro
d. Checker

3. List at least three model series of In-Sight cameras.
a. 5000
b. Micro
c. 7000
d. 8000
e. 2000

4. Name the two User Interface modes of In-Sight Explorer.
a. EasyBuilder View
b. Spreadsheet View

5. What are at least three benefits of the Spreadsheet Interface?
a. Allows access to all In-Sight functionality
b. Allows creation of a custom graphical user interface (Custom View)
c. Allows complex logic statements
d. Can be slightly faster

6. What are the results of using a higher-resolution imaging chip?
a. Same Field of View, more pixels per feature
b. Larger Field of View, same pixels per feature
c. Increased accuracy with measurement tools

b. Spreadsheet Pane
c. Files Pane
d. Palette Pane

2. List three types of online trigger.
a. Camera
b. Continuous
c. External
d. Manual
e. Network (and others)

3. Name the two types of filmstrips. What are the two main differences?
a. PC and Sensor filmstrips
b. The PC filmstrip stores images on the computer, limited only by hard drive space. The Sensor filmstrip stores images on the camera, 20 images max.

4. What are the two kinds of reference in a spreadsheet cell and how do they differ?
a. Absolute and Relative
b. A relative reference changes when copied to another cell; an absolute reference does not.

5. Explain why it is a good idea to Save the job periodically when you are making changes.
While you are making changes, the job is in working memory (RAM). If power to the camera were lost, you would lose your changes. When you Save, you copy the job to flash memory, which retains the information even if power is lost.
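The relative-versus-absolute behavior in question 4 can be illustrated with a small sketch. This is plain Python modeling the general spreadsheet rule, not In-Sight code; the tuple representation of a reference is an assumption made for the example.

```python
# Sketch of relative vs. absolute cell references (not In-Sight code).
# A reference is modeled as (row, col, is_absolute).

def copy_reference(ref, row_offset, col_offset):
    """Copying a formula shifts relative references by the copy offset;
    absolute references stay fixed."""
    row, col, is_absolute = ref
    if is_absolute:
        return (row, col, True)
    return (row + row_offset, col + col_offset, False)

relative = (5, 0, False)   # like A5
absolute = (5, 0, True)    # like $A$5

# Copy a formula one row down and two columns right:
print(copy_reference(relative, 1, 2))  # (6, 2, False) - reference moved
print(copy_reference(absolute, 1, 2))  # (5, 0, True)  - reference unchanged
```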
In-Sight Spreadsheets Standard Skills Journal Answer Key
Section 3 – Pattern & Logic Tools
• Apply the Property Sheet parameters and auto-inserted information for FindPatterns to a sample image
• Configure the PatMax Patterns tools
• Create basic mathematical formulas involving If, And, InRange, and Not functions
• Identify uses for the PatMax technology

1. What are the three parameters that are typically used to locate a part?
a. Row
b. Column
c. Angle

2. List three parameters in the FindPatterns tool that provide a tradeoff between speed and accuracy.
a. Model Type
b. Coarseness
c. Accuracy
d. Accept Threshold
e. Confusion Threshold

3. List three enhancements in the PatMax Property Sheets compared to the FindPatterns tool.
a. Elasticity
b. Ignore Polarity
c. Angle can have different Start and End
d. Scale can have different Start and End
e. Aspect Ratio tolerances
f. The edge model in PatMax follows the curve of the model; FindPatterns follows the pixel grid

4. List at least three logic functions and give an example of each.
a. If – If(A5>10, 1, 0)
b. And – And(A5, A6, A7)
c. InRange – InRange(A5, 6, 8)
d. Not – Not(If(A5>10, 1, 0))

5. Suppose you are inspecting for the proper location of the metal tab on top of a shiny soda can using a bright light above it, so there is glare. Compare the advantages and disadvantages of using FindPatterns, PatMax, and PatMaxRedLine.
a. FindPatterns: Might work well, and has the advantage of coming with all standard In-Sights. Uses a grid-based feature representation, which is less accurate than the other two tools.
b. PatMax: Deals better with shiny parts with glare. Is an optional tool. Might be slower than the other two tools.
c. PatMaxRedLine: Has both speed and high resolution, and might work better with shiny parts with glare. Is an optional tool.

Section 4 – Histogram & Edge Tools
• Apply the Property Sheet parameters and auto-inserted information for the ExtractHistogram tool to a sample image
• Apply the Property Sheet parameters and auto-inserted information for the Edge Functions to a sample image
• Describe the two groups of Edge functions
• Explain why the region for an Edge tool must be rotated to detect a horizontal edge

1. List the five results automatically inserted in the cells by the ExtractHistogram tool.
a. Threshold
b. Contrast
c. DarkCount
d. BrightCount
e. Average

2. List at least two kinds of inspections for which ExtractHistogram may be used.
a. Presence/absence
b. Uniformity of grey values
c. Illumination levels

3. List the two categories of Edge Tools and name a tool in each category.
a. Tools that find edges: FindCircle, FindLine, FindSegment, etc.
b. Tools that operate on edges: PairDistance, PairEdges, etc.

4. What is the correct method to draw an Edge Tool in relation to the edge to be found?
a. Perpendicular (correct)
b. Parallel
c. Any

5. Why must the region for an Edge tool be rotated when locating a horizontal edge?
The side of the region that looks for an edge (the side with an arrow) must cross the edge.
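The logic functions in Section 3, question 4, combine in the spreadsheet much like ordinary boolean expressions. The sketch below gives hedged Python analogues (these are illustrative stand-ins, not the actual In-Sight functions) so the examples in the answer can be traced step by step.

```python
# Python analogues (assumed, for illustration only) of the spreadsheet
# logic functions If, And, and InRange.

def sheet_if(cond, then_val, else_val):
    """If(cond, then, else) returns one of two values."""
    return then_val if cond else else_val

def sheet_and(*values):
    """And(...) is true only when every argument is nonzero."""
    return all(bool(v) for v in values)

def in_range(value, low, high):
    """InRange(value, low, high) tests inclusion in a closed interval."""
    return low <= value <= high

a5 = 12                                # pretend cell A5 holds 12
print(sheet_if(a5 > 10, 1, 0))         # 1
print(sheet_and(1, 1, 0))              # False
print(in_range(a5, 6, 8))              # False
print(not sheet_if(a5 > 10, 1, 0))     # Not(...) inverts the 1 result
```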
Section 5 – Blob & Image Tools
• Apply the Property Sheet parameters and auto-inserted information for the ExtractBlobs tool to a sample image
• Apply a CompareImage function to create a filtered image of part defects
• Import and Export Snippets

1. What are the two modes for setting the threshold for ExtractBlobs?
a. Automatically
b. Manually

2. Explain the difference between Number to Sort=0 and Number to Sort=5.
The first will return just the count of blobs in the region. The second will look for 5 blobs and return geometric results for each blob found.

3. Name three Image Functions and indicate what they do.
a. CompareImage – shows the difference between the trained model and the part
b. Erode – shrinks white
c. Dilate – expands white
d. Binarize – makes the image all black and white
e. Clip – clips low and high greyscale values
f. Stretch – expands the greyscale range

Section 6 – Cell State, Error Handling & Calibration
• Explain the uses of Cell State
• Explain the uses of Error Handling
• Implement a non-linear Calibration using the Calibration Wizard
• Identify the three steps in Grid Calibration

1. Suppose you want to communicate the Average result from ExtractHistogram but only when a part fails. How would you do this in the spreadsheet?
a. Set up a cell that has a 1 for fail, 0 for pass.
b. Then cell-state the communication tool to that cell.

2. What are the two types of situations that result in an #ERR?
a. Invalid specification of a parameter in a Property Sheet
b. A tool that tries to find something cannot find it.

3. Name the two functions that can handle cells with #ERR in them.
a. CountError
b. ErrFree

4. What are the three steps in Grid Calibration?
a. Setup
b. Pose
c. Results

5. List three factors that can affect the accuracy of a vision system.
a. Part being inspected
b. Accuracy of the calibration grid
c. Quality of the lens
d. How well the camera is mounted
e. Image quality
f. Accuracy of the vision tools
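The Binarize and Erode behavior described in Section 5, question 3, can be sketched on a tiny image. This is plain Python over a 2D list, an assumed simplification of what the In-Sight Image Functions do internally (here erosion uses 4-connected neighbours and turns border pixels black).

```python
# Sketch (not In-Sight code) of Binarize and Erode on a tiny greyscale image.

def binarize(img, threshold):
    """Map every pixel to 0 (black) or 255 (white) around a threshold."""
    return [[255 if p >= threshold else 0 for p in row] for row in img]

def erode(img):
    """Shrink white regions: a pixel stays white only if it and its
    4-connected neighbours are all white; border pixels become black."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            neighbours = (img[r][c], img[r - 1][c], img[r + 1][c],
                          img[r][c - 1], img[r][c + 1])
            out[r][c] = 255 if all(p == 255 for p in neighbours) else 0
    return out

grey = [[10,  20,  30, 20],
        [15, 200, 210, 25],
        [12, 205, 220, 18],
        [11,  14,  16, 13]]

bw = binarize(grey, 128)   # leaves a white 2x2 block in the middle
print(erode(bw))           # the thin white block is eroded away entirely
```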
Section 7 – Discrete I/O
• Identify which functions to use to read from and write to discrete channels
• Recognize which functions to use to read from and write to a serial port
• Implement the WriteDiscrete function correctly in a job, including proper I/O settings
• List four conditions that can affect whether In-Sight is Online or Offline

1. What are the two functions that are used in the spreadsheet to communicate over discrete lines?
a. ReadDiscrete
b. WriteDiscrete

2. List three discrete input line signal types. Hint: You can find them in the Discrete Input/Output configuration dialog.
a. User Data
b. Event Trigger
c. Online/Offline
d. Job Load Switch / Job ID

3. List three discrete output line signal types. Hint: You can find them in the Discrete Input/Output configuration dialog.
a. Programmed
b. Online/Offline
c. High
d. ERR: Missed Acquisition
e. Strobe

4. What are the four conditions that can affect whether In-Sight is online or offline?
a. Startup dialog box
b. Online/Offline button in the toolbar (or pull-down menu)
c. Discrete input line
d. Native Mode command

Section 8 – Network Communications
• List different forms of network communication such as:
- PLC protocols
- FTP
- TCP/IP
• Explain Client/Server communication in TCP/IP communications

1. List three forms of network communication.
a. PLC
b. FTP
c. TCP/IP

2. What are the three functions used for TCP/IP communications?
a. ReadDevice
b. WriteDevice
c. TCPDevice

3. In TCP/IP communications, how do the roles of Client and Server differ?
The Client initiates the communication and knows which device is the Server. The Server waits to be contacted and does not know in advance which device will make contact.

4. How does In-Sight know whether it is the Client or Server?
a. By the Host Name in TCPDevice:
- If blank, In-Sight is the Server.
- If not, In-Sight is the Client.

5. Name the two functions that write images or data to an FTP server.
a. WriteImageFTP
b. WriteFTP
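The Client/Server role split in Section 8, question 3, can be sketched with ordinary sockets. This is plain Python, not In-Sight's TCPDevice function, and the port number and message text are arbitrary assumptions; the point is only that the server waits without knowing who will connect, while the client initiates and must know the server's address.

```python
# Minimal TCP Client/Server role sketch (not In-Sight's TCPDevice).
import socket
import threading

received = []
ready = threading.Event()

def server(port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()                       # the server waits; it does not know
    conn, _addr = srv.accept()        # in advance who will connect
    received.append(conn.recv(1024).decode())
    conn.sendall(b"ACK")
    conn.close()
    srv.close()

t = threading.Thread(target=server, args=(50007,))
t.start()
ready.wait()

# The client initiates contact: it must know the server's address.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 50007))
cli.sendall(b"PASS,1")
reply = cli.recv(1024).decode()
cli.close()
t.join()

print(received[0], reply)
```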
Section 9 – Order of Execution & Operator Interface
• Discuss how cells are executed in the spreadsheet
• List the In-Sight processor priorities and the information provided by the Job Profiler
• Create a Custom View in a job, including Status indicators, results of the vision analysis, and a button to control the region of the Histogram tool

1. What determines the order in which cells are executed in the spreadsheet?
Order of dependence. Where there is a choice, the order is across the row, then down to the next row.

2. What is the order of priority in which In-Sight handles an acquisition trigger?
a. Image acquisition
b. Execution of the spreadsheet
c. Serial port communications (except WriteSerial, which is priority 2)
d. Ethernet communications (except WriteDevice, which is priority 2)
e. Image logging
f. Screen update

3. List at least three categories the Profiler will display in a table about a job.
a. Execution time
b. Order of execution
c. Dependencies
d. Contents of cells
e. Results of tools

4. List the three steps you need to do to create and see a Custom View.
a. Select Edit > Custom View Settings
b. Select the cells to be used in the Custom View
c. Select View > Custom View (F6)

5. List at least three Control functions.
a. Button, CheckBox, ListBox
b. EditFloat, EditString, EditInt, EditRegion

Section 10 – Deployment
• Employ utilities available to deploy the In-Sight vision system such as:
- User Lists
- Back-up/Restore
- Update Firmware
- Automatic Startup of the Camera
• Display their application at deployment using the VisionView 900

1. List three types of settings accessible from the Sensor Menu.
a. Network
b. Date and Time
c. User Access
d. Image

2. Name the two utilities that allow you to copy all the files in a camera to a PC, and then copy them back to another camera.
a. Backup
b. Restore

3. List at least four User Interface preferences you can change in the System > Options menu.
a. Startup User Name
b. Emulation
c. Backup Directory
d. Default View
e. Record Defaults
f. Language
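The execution rule in Section 9, question 1, can be modeled with a short sketch. This is an assumed simplification of the scheduling rule stated in the answer, not In-Sight's actual scheduler: pick the first cell, in row-major order, whose prerequisites have already run.

```python
# Sketch (assumed model, not In-Sight's scheduler) of cell execution order:
# dependency order first; among ready cells, across the row, then down.

def execution_order(cells, deps):
    """cells: list of (row, col); deps: {cell: set of prerequisite cells}.
    Repeatedly take the first ready cell in row-major order."""
    done, order = set(), []
    pending = sorted(cells)           # (row, col) tuples sort row-major
    while pending:
        for cell in pending:
            if deps.get(cell, set()) <= done:
                order.append(cell)
                done.add(cell)
                pending.remove(cell)
                break
        else:
            raise ValueError("circular dependency")
    return order

cells = [(0, 0), (0, 1), (1, 0)]
deps = {(0, 1): {(1, 0)}}             # the cell at row 0, col 1 depends on
                                      # the cell below it at row 1, col 0
print(execution_order(cells, deps))   # [(0, 0), (1, 0), (0, 1)]
```

Note how the dependency forces the row-1 cell to run before the second row-0 cell, even though plain row-major order would visit it later.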
Section 11 – Lighting & Optics
• Explain Lighting and Optics terms
• Discuss the different Lighting Techniques
• Describe how the use of filters and colored lights will affect the quality of the image

1. If you increase the aperture of a lens, you let more light in. What would be a disadvantage of increasing aperture?
a. Decreases Depth of Field

2. List three kinds of lens distortion.
a. Pincushion
b. Wave
c. Barrel

3. What type of lens has no perspective distortion?
a. Telecentric

4. List at least five lighting techniques.
a. Direct
b. Dome
c. Back
d. Structured
e. DOAL

5. What type of filter is good for removing background glare?
a. Polarizer

6. What are the two categories of wavelengths that the camera “sees” that humans do not?
a. Infrared
b. Ultraviolet
Cognex Corporation
1 Vision Drive, Natick, MA 01760
Phone: 508-650-3000, Fax: 508-650-3333
Website: http://www.cognex.com

In-Sight Spreadsheets Standard (550.1.0)

Copyright © March 2018 Cognex Corporation. All Rights Reserved. Printed in the USA.

This document may not be copied in whole or in part, nor transferred to any other media or language without the written permission of Cognex Corporation.