Published by Than Lwin Aung on Feb 15, 2011. Copyright: Attribution Non-commercial.

LabVIEW Machine Vision for Line Detection
EGR315: Instrumentation
Spring 2010
Than Aung and Mike Conlow
Department of Physics and Engineering
Elizabethtown College, Elizabethtown, Pennsylvania
Email: aungt@etown.edu and conlowm@etown.edu
Abstract – The aim of this project is to improve the previous development of a visual obstacle-avoidance algorithm without using proprietary software that is not financially practical to purchase. The goal is to use more advanced methods to produce more concise output in terms of turning angle and the nearest point of interest.

I. Introduction
The following is an analysis of improvements made to a system previously developed using NI Vision Development software to detect white lines on non-uniform grass. The need arose from the over-complexity of the vision system on an autonomous robot used as a learning platform. The current system [5] uses a DVT Legend 554C that collects and filters the images internally and transmits the relevant data, via a TCP/IP string, to a LabVIEW program that performs closed-loop motor control. During the fall semester of 2009, using the NI Vision Development package, a prototype virtual instrument was developed to attempt to improve processing speed by using LabVIEW for the entire image processing procedure through a USB webcam [6].

Several improvements needed to be made to the prototype in order to justify its implementation over the previous vision system. The turning algorithm depended on a set of line-detection sub virtual instruments that generated large amounts of noise due to inadequate intensity filtering. To resolve these and other issues, the filtering, thresholding, and line detection were programmed using the base package of LabVIEW, with LabVIEW IMAQ and IMAQ USB used to capture the images from a webcam [6].

The result is a great improvement over the previous version. Further enhancements still need to be implemented for proper operation in the field, but the goals set for this semester have been met.
 
II. Background
The previous project mainly employed NI Vision Development Module 9.0 (Trial Version), which provides various image processing and machine vision tools. Using its edge-detection sub virtual instrument, we implemented the following line detection algorithm.

The image resolution is set to 320x240 pixels, capturing at 8 frames per second. Each frame is converted to an 8-bit gray-scale image, and the image is then segmented into regions as follows [6]:

Figure 1: Edge Detection Regions

White lines are detected with IMAQ Edge Detection by finding lines in the eight border regions, represented in green in Figure 1. Our algorithm uses two vertical line detectors (VL1 and VL2) and two horizontal line detectors (HL1 and HL2). VCL (Vertical Center Line) is calculated by averaging VL1 and VL2; likewise, HCL (Horizontal Center Line) is calculated by averaging HL1 and HL2. The line angle is then calculated by finding the angle between HCL and VCL, using
tan θ = (m2 − m1) / (1 + m1·m2)

where m2 is the slope of HCL and m1 is the slope of VCL. Using the intersection point and the angle between HCL and VCL, the appropriate heading for the robot is determined.

Although the algorithm seems simple enough, it has several drawbacks. First, converting 32-bit color images to 8-bit gray-scale images loses edge information in every frame; in the presence of background noise it is very difficult to detect stable edges, making line detection less accurate. Second, using four edge detectors is unnecessarily redundant, and over-use of edge detectors results in slower processing. Third, we did not have time to implement the filters needed to eliminate noise and to threshold away unnecessary pixel information. Finally, since we used the 30-day trial version of the NI Vision Development Module, the only option for continuing to use the program was to purchase the three-thousand-dollar full version.

Therefore, the primary motivation of our project was to solve the problems we faced
 
by using NI Vision Development, and to improve upon the shortcomings of the first project. With these goals in mind, we developed the second version of our line detection algorithm.
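The center-line averaging and angle computation described in Section II can be sketched in Python. This is a minimal illustration of the math, not the LabVIEW VI itself; the function names and example slopes are ours.

```python
import math

def center_slope(a, b):
    """Average two detected line slopes (e.g. VL1/VL2 -> VCL, HL1/HL2 -> HCL)."""
    return (a + b) / 2.0

def line_angle(m1, m2):
    """Angle between two lines with slopes m1 (VCL) and m2 (HCL), in degrees.

    Uses tan(theta) = (m2 - m1) / (1 + m1*m2); the absolute value keeps the
    result in [0, 90). When 1 + m1*m2 == 0 the lines are perpendicular.
    """
    denom = 1.0 + m1 * m2
    if denom == 0.0:
        return 90.0  # perpendicular lines
    return math.degrees(math.atan(abs((m2 - m1) / denom)))

# Hypothetical detections: two near-vertical edges and two near-horizontal edges
m_vcl = center_slope(10.0, 12.0)   # -> 11.0
m_hcl = center_slope(0.1, -0.1)    # -> 0.0
print(round(line_angle(m_vcl, m_hcl), 1))
```

The intersection point of the two center lines would then be combined with this angle to set the robot's heading, as the paper describes.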
III. Implementation
Our project goals were to reduce noise during image acquisition, enhance the edge information, and stabilize the detected line even in the presence of background reflections and light sources. We therefore divided the project into separate modular processes to achieve these goals.
A. Single Color Extraction
The images acquired from the camera (Creative VF-0050) are 320x240, 32-bit color images. Although we could simply convert the 32-bit color (RGB) images to 8-bit grayscale by averaging the color channels, we learned a better method for eliminating noise and enhancing the edge information. Since the background of the images is mostly green, we decided that simply extracting the blue pixels from the RGB images would reduce noise and enhance the white lines. The reasoning is that dirt and grass are mostly composed of reds and greens, so if we only look at objects containing some amount of blue, the most intense blues will be whites.

In binary format, a 32-bit color pixel is represented as follows (x is a binary 1 or 0):

Alpha      Red        Green      Blue
xxxx xxxx  xxxx xxxx  xxxx xxxx  xxxx xxxx

To extract the blue color information, we performed an AND operation between the 32 color bits and the following binary bit mask [2]:

0000 0000 0000 0000 0000 0000 1111 1111
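The masking step can be sketched in Python, assuming pixels are packed as 32-bit ARGB integers with blue in the lowest byte (the layout shown above). The names here are ours, not from the LabVIEW VI.

```python
# Mask matching the binary pattern above: only the low 8 (blue) bits survive.
BLUE_MASK = 0x000000FF  # 0000 0000 0000 0000 0000 0000 1111 1111

def extract_blue(pixel32):
    """Return the 8-bit blue component of a 32-bit ARGB pixel via bitwise AND."""
    return pixel32 & BLUE_MASK

# A white pixel (all channels at 255) keeps a full-intensity blue value,
# while a pure-green "grass" pixel masks to 0.
white = 0xFFFFFFFF
green = 0xFF00FF00
print(extract_blue(white))  # 255
print(extract_blue(green))  # 0
```

Applied per pixel, this turns the 32-bit color frame into the 8-bit blue-channel image shown in Figure 3, suppressing the red/green grass background while keeping the white lines bright.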
Figure 2: 32-bit Color Image
Figure 3: 8-bit Blue Color Image
