Colour Image Segmentation Using FPGA

Chapter 1 INTRODUCTION
In image research and applications, people are often interested only in certain parts of an image. These parts are frequently referred to as the target or foreground (the remaining part is called the background); they generally correspond to areas of the image with specific, distinctive properties. To identify and analyze the target, it must be extracted and separated from the rest of the image; only on this basis can the target be put to further use. Image segmentation is the technique and process of dividing an image into regions with distinct features and extracting the target of interest. Here the features can be pixel grayscale, colour, texture, and so on. The pre-defined target may correspond to a single region or to multiple regions. To illustrate where image segmentation sits within image processing, we introduce the concept of "image engineering", which brings the theories, methods, algorithms, tools, and equipment involved in image segmentation into an overall framework. Image engineering is a new discipline for the research and application of images, and its content is very rich. According to the degree of abstraction and the research methods used, it can be divided into three levels: image processing, image analysis, and image understanding.

Figure 1 - Hierarchical Needs

Dept. of ECE, VJCET    Page 1


Image processing emphasizes transformations between images and improving the visual effect of an image. Image analysis mainly monitors and measures the targets of interest in an image in order to obtain objective information about them and build up a description of the image. The key point of image understanding is to further study the nature of each target and the relationships between targets, to obtain an explanation of the objective scene behind the original image, and thereby to guide and plan actions. Image processing, image analysis, and image understanding operate at different levels; refer to Figure 1. Image processing is a relatively low-level operation, mainly carried out at the pixel level. Image analysis enters the middle level, focusing on measurement, expression, and description of targets. Image understanding is mainly a high-level operation, essentially concerned with operations and inference on symbols abstracted from the description.

Image segmentation is the key step from image processing to image analysis, and it occupies an important place. On the one hand, it is the basis of target expression and has an important effect on feature measurement. On the other hand, segmentation, the target expression based on it, and the subsequent feature extraction and parameter measurement convert the original image into a more abstract and more compact form, making high-level image analysis and understanding possible. In practice, image segmentation is applied very widely, appearing in almost all areas related to image processing and involving all types of images. In these applications, image segmentation is usually used for image analysis, identification, compression coding, and so on.



Chapter 2 THE STUDY OF COLOR IMAGE SEGMENTATION
The human eye can distinguish thousands of colours but only about 20 levels of grayscale, so we can easily and accurately find a target in a colour image that would be difficult to find in a grayscale image. The reason is that colour provides more information than grayscale, which makes colour very useful and necessary for pattern recognition and machine vision. At present there are not as many approaches devoted specifically to colour image segmentation as there are for grayscale images; most proposed colour image segmentation methods combine existing grayscale segmentation methods with different colour spaces. Commonly used colour image segmentation methods include histogram thresholding, feature-space clustering, region-based approaches, edge-detection-based methods, fuzzy methods, artificial neural network approaches, physical-model-based methods, and so on.

The practical method of this project combines the watershed algorithm and the region growing algorithm for colour image segmentation. Region growing has two main disadvantages. First, it needs human interaction to obtain the seed points, so the user must plant a seed point in every region to be extracted. Second, region growing is sensitive to noise, so the extracted region may contain holes, or separate regions may become linked under local effects. This article therefore selects seed pixels automatically according to certain rules.

2.1 ALGORITHM BASICS

The basic idea of the region growing method is to collect pixels with similar properties to form a region. First, we need to find a seed pixel as a starting point for each region to be segmented. Then the pixels around the seed whose properties are the same as or similar to the seed's (as decided by a predetermined growing or similarity criterion) are merged into the seed pixel's region. These new pixels act as new seed pixels, and the process continues until no more pixels satisfying the condition can be included; the region has then grown. The advantage of the region growing algorithm is that it is easy to implement and compute. Like thresholding, region growing is rarely used alone; it is often combined with other segmentation methods. In the practical application of this method we need to address three questions:
a) Choosing or determining a group of seed pixels that correctly represent the required region;
b) Fixing the criterion that decides which adjacent pixels are included during growth;
c) Making rules or conditions to stop the growth process.

Seed region growing (SRG) is an image segmentation method proposed by Adams and Bischof. The method begins with a set of "seed" points and attaches to each seed the adjacent pixels that have properties similar to it (such as grayscale or a specific range of colour) during the growth of the region [5]. Mehnert and Jackway further described the dependency relationships between pixels in seed growth:
i) The first order of dependence occurs when a number of pixels have the same difference ratio as their vicinity.
ii) The second order of dependence occurs when a single pixel has the same difference ratio as its vicinity.
Frank and Shouxian Cheng applied an automatic seed selection method: they selected seeds that represent the regions to be segmented based on certain similarity criteria, and proposed a strategy to resolve the two pixel dependencies above. This effectively solves the first question.

The selection of the growth criterion depends not only on the specific problem itself but also on the type of image data in practice. For example, when the image is colourized, the result will suffer if only a monochrome criterion is used. Therefore, in this paper we carry out seed selection and region growing according to the hue and saturation in the colour image; this effectively solves the second question.

The domain decomposition technique continually splits the seed region into four rectangular regions until the interior of every region is similar; its disadvantage is that it may destroy region borders. Region merging is often combined with region growing and domain decomposition in order to merge similar sub-regions into domains as large as possible.

The method in this paper combines the watershed algorithm with the approach of Frank and Shouxian Cheng, and proposes a new region growing algorithm based on the traditional seed region growing algorithm. First, we use the watershed algorithm to produce an initial segmentation of the image. Secondly, starting from the regions formed by the watershed algorithm, some regions are automatically selected as seed regions according to certain rules, and region growing is carried out on the basis of the watershed segmentation. Finally, the resulting regions are merged.

Figure 2 - Steps in Algorithm

Chapter 3 BLOCK DIAGRAM

INPUT IMAGE -> FPGA SPARTAN 3E -> SEGMENTATION -> OBJECT RECOGNITION -> UART -> OUTPUT IMAGE

Figure 3 - Block Schematic

3.1 BLOCK DIAGRAM DESCRIPTION

The entire block diagram can be divided into two segments: the raw data inputting and the data processing.

INPUTTING IMAGE

The input image is in .tif or .jpg format and is loaded using the MATLAB interface. The image is of RGB colour format, so there are three planes of colour: Red, Green and Blue. We split the image into these three planes and consider the intensity of each plane separately. The input image is of 64x64, 128x128 or 256x256 pixel size; when the input image is of a different size, a resizing mechanism is applied to obtain a standard input resolution. The intensity of each pixel is converted into a grayscale value ranging from 0 to 255, where 0 represents black and 255 represents white. Each pixel value, thus ranging between 0 and 255, is stored in a header file. In this way the input images are converted into raw bit streams.

DATA PROCESSING

The raw bit stream generated using MATLAB is transferred to the Spartan 3E FPGA. The algorithm code, developed using the Xilinx XPS/IDE, is loaded into the Spartan 3E over the serial bus. The entire data processing is done on the Spartan 3E, which performs the algorithm and segmentation: both the thresholding and the region growing algorithm run on the Spartan. Once the region of interest has been marked out using these algorithms, object recognition is performed to determine the nature of the object of concern. Once the target object has been identified, the region of interest is output back to the system over the UART serial bus interface.

HARDWARE

Chapter 4 SPARTAN 3 FPGA TRAINER KIT

4.1 Block Diagram

Figure 4 - Spartan EDK Trainer Kit Block Diagram

The FPGA used in the kit is the SPARTAN 3E, a Xilinx product with 500k gates. Generally the FPGA is available in two speed grades, high and standard. The Spartan 3E used in the trainer kit has a QFP package and a temperature range of 0 to 85 degrees Celsius. The kit has a 50 MHz clock generator whose output is given as input to the DCM of the FPGA. The trainer kit provides an external SRAM, of size 256K x 16, for loading programs and other functions. JTAG and RS232 connectors are provided in the kit for programming the Spartan 3E. Other features such as a 7-segment display, LEDs, DIP switches and an LCD display are also provided. Generally a 5 V adapter is used to power the kit; there are points for Vcc to connect, and a regulator to regulate the voltage. Details of each component are discussed below.

4.2 FPGA SPARTAN 3E XC3S500E

4.2.1 Introduction

The Spartan-3E family of Field-Programmable Gate Arrays (FPGAs) is specifically designed to meet the needs of high-volume, cost-sensitive consumer electronic applications. The five-member family offers densities ranging from 100,000 to 1.6 million system gates. The Spartan-3E family builds on the success of the earlier Spartan-3 family by increasing the amount of logic per I/O, significantly reducing the cost per logic cell. New features improve system performance and reduce the cost of configuration. These Spartan-3E FPGA enhancements, combined with advanced 90 nm process technology, deliver more functionality and bandwidth per dollar than was previously possible, setting new standards in the programmable logic industry.

Because of their exceptionally low cost, Spartan-3E FPGAs are ideally suited to a wide range of consumer electronics applications, including broadband access, home networking, display/projection, and digital television equipment. The Spartan-3E family is a superior alternative to mask-programmed ASICs. FPGAs avoid the high initial cost, the lengthy development cycles, and the inherent inflexibility of conventional ASICs. Also, FPGA programmability permits design upgrades in the field with no hardware replacement necessary, an impossibility with ASICs.

4.2.2 Features: I/O Capabilities of Spartan 3E

The Spartan-3E FPGA SelectIO interface supports many popular single-ended and differential standards. Spartan-3E FPGAs support single-ended standards such as:
- 3.3V low-voltage TTL (LVTTL)
- Low-voltage CMOS (LVCMOS) at 3.3V, 2.5V, 1.8V, 1.5V, or 1.2V
- 3.3V PCI at 33 MHz, and in some devices 66 MHz
- HSTL I and III at 1.8V, commonly used in memory applications
- SSTL I at 1.8V and 2.5V, commonly used for memory applications

Spartan-3E FPGAs also support most low-voltage differential I/O standards:
- LVDS (low-voltage differential signaling)
- Bus LVDS
- mini-LVDS
- RSDS
- Differential HSTL (1.8V, Types I and III)
- Differential SSTL (2.5V, Type I)
- 2.5V LVPECL inputs

4.2.3 Architectural Overview

Figure 5 - FPGA Architecture

Configurable Logic Blocks

The Configurable Logic Blocks (CLBs) constitute the main logic resource for implementing synchronous as well as combinatorial circuits. Each CLB is identical, and the Spartan-3E family CLB structure is identical to that of the Spartan-3 family. Each CLB contains four slices, and each slice contains two Look-Up Tables (LUTs) to implement logic and two dedicated storage elements that can be used as flip-flops or latches. The LUTs can be used as a 16x1 memory (RAM16) or as a 16-bit shift register, and additional multiplexers and carry logic simplify wide logic and arithmetic functions.

Figure 6 - CLBs

In the case of the XC3S500E there are a total of 1164 CLBs, arranged in 46 rows and 34 columns.

Input/Output Blocks

The Input/Output Block (IOB) provides a programmable, unidirectional or bidirectional interface between a package pin and the FPGA's internal logic. There are three main signal paths within the IOB: the output path, the input path, and the 3-state path. Each path has its own pair of storage elements that can act as either registers or latches. The IOB is similar to that of the Spartan-3 family, with the following differences:

A. Input-only blocks are added

In the Spartan XC3S500E there are a total of 232 I/Os, and 56 among them are input-only pins. Dedicated Inputs are IOBs that are used only as inputs. Pin names designate a Dedicated Input if the name starts with IP, for example IP or IP_Lxxx_x. Dedicated Inputs retain the full functionality of the IOB for input functions, with a single exception for differential inputs (IP_Lxxx_x): for the differential Dedicated Inputs, the on-chip differential termination is not available.

B. Programmable input delays are added to all blocks

Each IOB has a programmable delay block that optionally delays the input signal. In the figure below, the signal path has a coarse delay element that can be bypassed. The input

signal then feeds a 6-tap delay line. The coarse and tap delays vary. All six taps are available via a multiplexer for use as an asynchronous input directly into the FPGA fabric. Three of the six taps are also available via a multiplexer to the D inputs of the synchronous storage elements, so the delay inserted in the path to the storage element can be varied in six steps. The first, coarse delay element is common to both the asynchronous and synchronous paths, and must be either used or not used for both paths. In this way, the delay is programmable in 12 steps.

The delay values are set up in the silicon once at configuration time; they are non-modifiable during device operation. The primary use of the input delay element is to adjust the input delay path to ensure that there is no hold-time requirement when using the input flip-flops with a global clock.

Figure 7 - Programmable Delay Element

C. DDR flip-flops can be shared between adjacent IOBs

Double-Data-Rate (DDR) transmission describes the technique of synchronizing signals to both the rising and falling edges of the clock signal. These flip-flops can be shared by two IOBs.

IOB Organization

The Spartan-3E architecture organizes IOBs into four I/O banks, as shown in the figure below.

Figure 8 - IOB Banks

Supply Voltages for the IOBs

The IOBs are powered by three supplies:
1. The VCCO supplies, one for each of the FPGA's I/O banks, power the output drivers. The voltage on the VCCO pins determines the voltage swing of the output signal.
2. VCCINT is the main power supply for the FPGA's internal logic.
3. VCCAUX is an auxiliary source of power, used primarily to optimize the performance of various FPGA functions such as I/O switching.

Each bank maintains separate VCCO and VREF supplies. The separate supplies allow each bank to set VCCO independently; similarly, the VREF supplies can be set for each bank.

Block RAM

Spartan-3E devices incorporate 4 to 36 dedicated block RAMs, organized as dual-port configurable 18 Kbit blocks. Block RAM synchronously stores large amounts of data, while distributed RAM is better suited for buffering small amounts of data anywhere along signal paths. In the case of the XC3S500E we have 20 block RAMs, with 368,640 addressable bits, arranged in two columns.

Internal Structure of the Block RAM

The block RAM has a dual-port structure. The two identical data ports, called A and B, permit independent access to the common block RAM, which has a maximum capacity of 18,432 bits. Each port has its own dedicated set of data, control, and clock lines for synchronous read and write operations. There are four basic data paths, as shown in the figure below:
1. Write to and read from Port A
2. Write to and read from Port B
3. Data transfer from Port A to Port B
4. Data transfer from Port B to Port A

Figure 9 - Block RAM Data Paths

Multiplier Blocks

The Spartan-3E devices provide 4 to 36 dedicated multiplier blocks per device. The multipliers are located together with the block RAM in one or two columns, depending on device density. The multiplier blocks primarily perform two's-complement numerical multiplication, but can also perform some less obvious applications, such as simple data

storage and barrel shifting. Logic slices can also implement efficient small multipliers and thereby supplement the dedicated multipliers.

Digital Clock Manager Blocks

This block provides self-calibrating, fully digital solutions for distributing, delaying, multiplying, dividing, and phase-shifting clock signals. The DCM supports three major functions:

A. Clock-skew elimination: Clock skew within a system occurs due to the different arrival times of a clock signal at different points on the die, typically caused by the clock signal distribution network. Clock skew increases setup and hold time requirements and increases clock-to-out times, all of which are undesirable in high-frequency applications. The DCM eliminates clock skew by phase-aligning the output clock signal that it generates with the incoming clock signal; this mechanism effectively cancels out the clock distribution delays.

B. Frequency synthesis: The DCM can generate a wide range of output clock frequencies derived from the incoming clock signal. This is accomplished by either multiplying or dividing the frequency of the input clock signal by any of several different factors.

C. Phase shifting: The DCM provides the ability to shift the phase of all of its output clock signals with respect to the input clock signal. It has the provision to shift the phase by 90 and 180 degrees.

of ECE.DCM Functional Blocks and Associated Signals Dept. VJCETPage 18 .Colour Image Segmentation Using FPGA Figure 10 .

Chapter 5 JTAG

JTAG is an acronym that stands for "Joint Test Action Group"; JTAG is the informal name often used to describe the standard that resulted from the work of this group. The group was a consortium of vendors focused on problems found when testing electronic circuit boards; key members included TI, Intel and others. The term JTAG is used to describe test and debug interfaces based on the specifications brought about by this group. Specifically, the standard is known as IEEE 1149.1 Boundary-Scan.

The main problem that the JTAG group set out to solve was that traditional In-Circuit Test (ICT) was no longer as effective as it once was for board test. This change was due to the rise in use of surface-mount devices such as Ball Grid Array (BGA) devices. These new devices had their pins (called balls) on the bottom of the chip; when soldered down to a circuit board, the pins could not be accessed because they were covered by the chip itself. As many of these modern ICs had many hundreds of pins, it quickly became impractical to add test points for all the new pins. The ever-decreasing size of modern electronic circuits and the rise of multi-layer printed circuit boards were also key drivers for JTAG.

JTAG Boundary-Scan test tools allow hardware-level debugging, programming and testing of circuit boards. JTAG emulators leverage extended registers and boundary-scan instructions, extra features put on-chip by the processor manufacturer, which allow the JTAG connector to be used to control a microprocessor: run, stop, step, and read/write memory and registers.

5.1 Standard JTAG Connector Signals

JTAG utilizes a standard set of signals to communicate with the IC or ICs under test. These signals taken together are known as the TAP, or Test Access Port. The pins that comprise the TAP interface are used to control access to a long chain of I/O cells at each pin of a device. By clocking data patterns in and reading values out, it is possible to set and determine the state of a pin. By extension, since other non-JTAG devices may be connected to these pins, those devices can often be tested as well. This standard

configuration is typically used for FPGA (such as Xilinx) JTAG programming adapters. The various signals are:
1. TDI (Test Data In): the data from the JTAG tool into the device under test (DUT)
2. TDO (Test Data Out): the data out of the last IC in the chain on the DUT, back to the test tool
3. TCLK (Test Clock): the JTAG system clock from the tool into the device
4. TMS (Test Mode Select): this signal from the tool to the device is used to manipulate the internal state machine that controls Boundary-Scan operation
5. TRST (Test Reset): an optional signal, often used with JTAG emulation or when multiple chains must be tied together

5.1.1 Boundary Scan

Other than downloading data to the chip, an important function JTAG can perform is boundary scan. It can be used in both chip-level and board-level testing.

A. Chip-Level Boundary-Scan Testing

Boundary scan allows the following types of chip-level testing:
- Presence of the device: is the device on the board, did it get soldered on, is it the wrong package?
- Orientation of the device: is it oriented correctly, or is it rotated?
- Bonding to the board: is it soldered properly, are there issues with the solder joints, is the internal pin-to-die interconnect damaged?
- Ability to verify the presence and integrity of the entire scan chain
- Reading the device's ID register (to get chip revision level information)

Board-Level Testing

Boundary scan at the board level adds inter-device and board-level testing, such as device interconnect tests and detection of open, short, and stuck-at (0/1) failures, as well as verification of the scan chain and each device on it. Boundary scan comes to the rescue in several important ways:

B. Testing Partially Populated Hardware
When you get your initial boards, not all devices may be fitted. We only need good power, ground, and at least one part on the JTAG chain to begin testing; we should then be able to ID the part on the chain and test for opens and shorts in any board area touched by that device.

C. Finding Assembly Defects
Prototypes are often rushed through assembly in order to meet engineering deadlines. As a result, assembly and manufacturing problems will exist. Boundary scan is perfect for testing for common problems like unfitted or ill-fitted devices, solder issues (cold or hot joints), opens, shorts, stuck-at faults, and device functional failures.

D. Initializing and Programming Devices
You may also be able to do initial device programming: boundary scan can allow you to ID, program, and test devices such as FPGAs and CPLDs. For example, if a device on the chain is a microprocessor or DSP, you will most likely have access to RAM and FLASH memory via the address, data, and control buses.

E. Improving Debug Productivity
When bringing up and debugging new hardware, boundary scan can be used to rule out bad hardware before the initial firmware or diagnostics are written, so you can focus your debug efforts on the new release of firmware, knowing full well that your hardware is good.

Chapter 6 RS 232

In telecommunications, RS-232 is the traditional name for a series of standards for serial binary single-ended data and control signals connecting a DTE (Data Terminal Equipment) and a DCE (Data Circuit-terminating Equipment). It is commonly used in computer serial ports. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors. It does not define character framing within the data stream or character encoding.

In RS-232, user data is sent as a time series of bits. Both synchronous and asynchronous transmissions are supported by the standard. In addition to the data circuits, the standard defines a number of control circuits used to manage the connection between the DTE and DCE. Each data or control circuit operates in only one direction, that is, signaling from a DTE to the attached DCE or the reverse. Since transmit data and receive data are separate circuits, the interface can operate in a full-duplex manner, supporting concurrent data flow in both directions.

6.1 Voltage Levels

The RS-232 standard defines the voltage levels that correspond to logical one and logical zero for the data transmission and the control signal lines. Valid signals are plus or minus 3 to 15 volts; the range within 3 V of zero volts is not a valid RS-232 level. In practice, signal levels of 5 V, 10 V, 12 V, and 15 V are all commonly seen, depending on the power supplies available within a device. The standard specifies a maximum open-circuit voltage of 25 volts, and RS-232 drivers and receivers must be able to withstand an indefinite short circuit to ground or to any voltage level up to 25 volts.

6.2 RS 232 DB9 Connector Pinout

Pin number   Name   Function
1            CD     Carrier Detect
2            RXD    Receive Data
3            TXD    Transmit Data
4            DTR    Data Terminal Ready
5            GND    Signal Ground
6            DSR    Data Set Ready
7            RTS    Request To Send
8            CTS    Clear To Send
9            RI     Ring Indicator
Shell               Shield

Table 1 - Pins of RS232

SOFTWARE

Chapter 7 SOFTWARE DEVELOPMENT TOOLS

The software part of our project involves two sections:

7.1 MATLAB

The algorithm of the MATLAB program is:
I. The picture is given as input to MATLAB.
II. The input image is resized to the required size.
III. The input image is converted to grayscale.
IV. The image is then converted to a bit stream.
The output of MATLAB is the bit stream of the input image, which contains all the information about the pixel values. This is then given to the Spartan trainer kit using JTAG.

7.2 System C (Xilinx XPS)

The programming for image segmentation is done in System C. The code is a combination of several algorithms: region growing, the histogram approach, and edge detection. The flowchart of the program is given below.

7.3 FLOWCHART

Input Image
    |
Divide into different parts
    |
Estimate seed point
    |
Compare with neighboring pixels
    |
Is difference > threshold?  -- NO -->  Group the pixels
    | YES: pixel is not grouped

Figure 11 - Algorithm Flow Chart

7.4 ALGORITHM

I. The image is divided into segments for easy analysis.
II. The median value for each segment is calculated.
III. The edge of the image is detected by comparison with the default image, and all pixel values outside the image are made zero.
IV. Using the threshold values, pixels with similar values are grouped within the image.
V. Pixels above the threshold are set to 255 and pixels below it are set to zero.

7.5 MATLAB CODE

function varargout = segment(varargin)
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @segment_OpeningFcn, ...
                   'gui_OutputFcn',  @segment_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end

% End initialization code

% --- Executes just before segment is made visible.
function segment_OpeningFcn(hObject, eventdata, handles, varargin)
handles.output = hObject;
guidata(hObject, handles);

function varargout = segment_OutputFcn(hObject, eventdata, handles)
varargout{1} = handles.output;

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
[filename, pathname] = uigetfile('*.avi', 'Pick an video');
if isequal(filename, 0) || isequal(pathname, 0)
    warndlg('User pressed cancel')
else
    a = aviread(filename);
    axes(handles.axes1);
    movie(a);
    handles.filename = filename;
    guidata(hObject, handles);
end

% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
filename = handles.filename;
str1 = 'frame';
str2 = '.bmp';

%%%%%%%%%%%%%%%% frame separation %%%%%%%%%%%%%%%%
q = 2;                              % quantization value
file = aviinfo(filename);           % to get information abt the video file
frm_cnt = file.NumFrames;           % no. of frames in the video file
h = waitbar(0, 'Please wait...');
for i = 1:frm_cnt
    frm(i) = aviread(filename, i);  % read the video file
    frm_name = frame2im(frm(i));    % convert frame to image file
    filename1 = strcat(num2str(i), str2);
    imwrite(frm_name, filename1);   % write image file
    waitbar(i/10, h)
end
close(h);
handles.frm_cnt = frm_cnt;
guidata(hObject, handles);
warndlg('process completed');

% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
[filename, pathname] = uigetfile('*.*', 'Pick an Image');
if isequal(filename,0) | isequal(pathname,0)
    warndlg('User pressed cancel')
else
    filename = strcat(pathname, filename);
    a = imread(filename);
    b = imresize(a, [64 64]);
    a = b;
    imshow(b);
    handles.a = a;
    guidata(hObject, handles);
end

% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
res = handles.a;
[r c p] = size(res);
if (p==3)
    res = rgb2gray(res);
    imwrite(res, 'test.bmp');
else
    imwrite(res, 'test.bmp');
end
% imwrite(a, 'test.bmp');
res = double(res);
[r c] = size(res);
fid = fopen('Image2.h', 'wt');
as = 'unsigned char Input[64][64]=';
% as=8;
fprintf(fid, '%c', as);
fprintf(fid, '\n%c\n', '{');
for i = 1:c
    te = res(i,:);
    fprintf(fid, '%c', '{');
    fprintf(fid, '%d,', te);
    fprintf(fid, '%c%c\n', '}', ',');
end
fprintf(fid, '%c', '}');
% Update handles structure
guidata(hObject, handles);

% fprintf(fid,'%c','.');
fprintf(fid, '%c %c', '}', ';');
fclose(fid);
helpdlg('Files created Succesfully');

% --- Executes on button press in pushbutton5.
function pushbutton5_Callback(hObject, eventdata, handles)
delete('image3.h');
delete('test.bmp');
warndlg('Files Deleted Succesfully');

% --- Executes on button press in pushbutton6.
function pushbutton6_Callback(hObject, eventdata, handles)
close all;
exit;

7.6 SYSTEM C CODE

#include <stdio.h>
#include <math.h>
#include "Image3.h"
#include "Image4.h"

int INPUT1[64][64];
int INPUT2[64][64];
int diff1[64][64];
int SEG1[64][64];
int FIRSTFILTER[64][64];
int EDGEIMAGE[64][64];
int Sliding_windowr[3][3];
int SR1, SR2, SR3, SR4, SR5, SR6, SR7, SR8, SR9;
int Value_Check;
int CHECK1;
int Cen[8];
int TH1 = 255;
int TH2 = 0;
int n = 8;
int count, t;

int CHECK2, CHECK3, CHECK4, CHECK5, CHECK6, CHECK7, CHECK8;
float bbr;

float median1(int SR1, int SR2, int SR3, int SR4,
              int SR5, int SR6, int SR7, int SR8)
{
    int i, j;
    int arr[8];
    float median;

    arr[0] = SR1;  arr[1] = SR2;  arr[2] = SR3;  arr[3] = SR4;
    arr[4] = SR5;  arr[5] = SR6;  arr[6] = SR7;  arr[7] = SR8;

    /* bubble sort into descending order; j stops at n-2 so that arr[j+1]
       stays inside the array */
    for (i = 0; i <= n-1; i++) {
        for (j = 0; j <= n-2; j++) {
            if (arr[j] <= arr[j+1]) {
                t = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = t;
            }
            else
                continue;
        }
    }
    /* even count: average the two middle values of the sorted array */
    if (n % 2 == 0) {
        median = (arr[n/2 - 1] + arr[n/2]) / 2.0;
    }

    else {
        median = arr[n/2];
    }
    return (median);
}  /* end of median */

void main()
{
    int i, j, k, l, m;
    int temp1, temp2, temp3;

    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            INPUT1[i][j] = InputImage3[i][j];
        }
    }
    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            INPUT2[i][j] = InputImage4[i][j];
        }
    }
    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            diff1[i][j] = (INPUT1[i][j] - INPUT2[i][j]);
            // printf("%d \n", INPUT1[i][j]);
            // printf("%d \n", INPUT2[i][j]);
            printf("%d\n", diff1[i][j]);
        }
    }

    /* binarise the frame difference against a fixed threshold of 20 */
    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            Value_Check = diff1[i][j];
            if (Value_Check > 20) {
                SEG1[i][j] = 255;
            }
            else {
                SEG1[i][j] = 0;
            }
            printf("%d \n", SEG1[i][j]);
        }
    }

    /* 3x3 median filtering of the binary image; border pixels of the
       64x64 image (index 0 and 63) are left unfiltered */
    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            if (i==0 && j==0) {
            }
            else if (i==0 && j==63) {
            }
            else if (i==63 && j==0) {
            }
            else if (j==63 && i==63) {
            }
            else if (i==0) {

            }
            else if (i==63) {
            }
            else if (j==63) {
            }
            else if (j==0) {
            }
            else {
                /* fill the 3x3 window around pixel (i,j); 0-based indices */
                Sliding_windowr[0][0] = SEG1[i-1][j-1];
                Sliding_windowr[0][1] = SEG1[i-1][j];
                Sliding_windowr[0][2] = SEG1[i-1][j+1];
                Sliding_windowr[1][0] = SEG1[i][j-1];
                Sliding_windowr[1][1] = SEG1[i][j];
                Sliding_windowr[1][2] = SEG1[i][j+1];
                Sliding_windowr[2][0] = SEG1[i+1][j-1];
                Sliding_windowr[2][1] = SEG1[i+1][j];
                Sliding_windowr[2][2] = SEG1[i+1][j+1];
                // disp(Sliding_window);
                /* the eight neighbours; the centre [1][1] is excluded */
                SR1 = Sliding_windowr[0][0];
                SR2 = Sliding_windowr[0][1];
                SR3 = Sliding_windowr[0][2];
                SR4 = Sliding_windowr[1][0];
                SR5 = Sliding_windowr[1][2];
                ///SR6=Sliding_windowr[1][1];
                SR6 = Sliding_windowr[2][0];
                SR7 = Sliding_windowr[2][1];
                SR8 = Sliding_windowr[2][2];
                bbr = median1(SR1, SR2, SR3, SR4, SR5, SR6, SR7, SR8);
            }
            FIRSTFILTER[i][j] = bbr;
            printf("%d\n", FIRSTFILTER[i][j]);
        }

    }  /* end of filtering loop */

    /* ---------------------- EDGE DETECTION ---------------------- */
    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            EDGEIMAGE[j][i] = 255;
        }
    }

    /* DETECTING ONES and DETECTING ZEROS: clear every pixel whose
       in-bounds 8-neighbourhood in FIRSTFILTER is uniform -- all 255
       (TH1, interior of an object) or all 0 (TH2, interior of the
       background). Pixels left at 255 have mixed neighbourhoods and
       mark the edges. Corner and border pixels of the 64x64 image
       (last index 63) simply have fewer neighbours to test. */
    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            int all_ones = 1, all_zeros = 1;
            int di, dj;
            for (dj = -1; dj <= 1; dj++) {
                for (di = -1; di <= 1; di++) {
                    int nj = j + dj, ni = i + di;
                    if (di == 0 && dj == 0) continue;
                    if (nj < 0 || nj > 63 || ni < 0 || ni > 63) continue;
                    if (FIRSTFILTER[nj][ni] != TH1) all_ones = 0;
                    if (FIRSTFILTER[nj][ni] != TH2) all_zeros = 0;
                }
            }
            if (all_ones || all_zeros) {
                EDGEIMAGE[j][i] = 0;
            }
        }
    }

    // printf("output");
    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            printf("%d \n", EDGEIMAGE[i][j]);
        }
    }

    /* count the object (255) pixels of the filtered image */
    count = 0;
    for (i = 0; i < 64; i++) {
        for (j = 0; j < 64; j++) {
            if (FIRSTFILTER[i][j] == 255) {
                count = count + 1;
            }
        }
    }
    // printf("%d \n", count);
}
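The edge-detection pass in the listing reduces to a single predicate: a pixel lies on an edge exactly when its in-bounds neighbourhood is neither uniformly 255 nor uniformly 0. A standalone sketch of that test, with an illustrative image size and function name:

```c
#define W 8  /* illustrative size; the project uses 64x64 */

/* Edge test distilled from the EDGEIMAGE pass: a pixel is *not* an edge
   when all of its in-bounds 8-neighbours share one value (all 255 inside
   an object, or all 0 inside the background); a mixed neighbourhood
   marks a boundary. */
int is_edge(int img[W][W], int i, int j)
{
    int all255 = 1, all0 = 1;
    for (int di = -1; di <= 1; di++) {
        for (int dj = -1; dj <= 1; dj++) {
            int ni = i + di, nj = j + dj;
            if (di == 0 && dj == 0) continue;            /* skip centre   */
            if (ni < 0 || nj < 0 || ni >= W || nj >= W)  /* skip off-image */
                continue;
            if (img[ni][nj] != 255) all255 = 0;
            if (img[ni][nj] != 0)   all0 = 0;
        }
    }
    return !(all255 || all0);  /* uniform neighbourhood => not an edge */
}
```

On a binary image whose left half is 255 and right half is 0, only the pixels along the dividing column test as edges.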

Chapter 8 APPLICATIONS

In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.

The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity, or texture; adjacent regions are significantly different with respect to the same characteristic(s). When applied to a stack of images, typical in medical imaging, the resulting contours after image segmentation can be used to create 3D reconstructions with the help of interpolation algorithms like marching cubes.

8.1 AUTOMATIC SURVEILLANCE - Project Application

Surveillance is the monitoring of the behaviour, activities, or other changing information, usually of people and often in a surreptitious manner. The word surveillance may be applied to observation from a distance by means of electronic equipment (such as CCTV cameras) or to the interception of electronically transmitted information (such as Internet traffic or phone calls). It may also refer to simple, relatively no- or low-technology methods such as human intelligence agents and postal interception. Surveillance is very useful to governments and law enforcement for maintaining social control, recognizing and monitoring threats, and preventing/investigating criminal activity. With the advent of programs such as the Total Information Awareness program and ADVISE, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance for Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of their subjects.
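The requirement that pixels with the same label share a characteristic can be illustrated with a small connected-component labelling sketch. The flood-fill approach, array size, and function names here are illustrative, not part of this project:

```c
#define W 8  /* tiny illustrative image */

/* Spread a label through the 4-connected region of pixels that share the
   starting pixel's value. Recursive flood fill; fine for tiny images only. */
static void flood(int img[W][W], int lab[W][W], int i, int j, int id)
{
    if (i < 0 || j < 0 || i >= W || j >= W) return;
    if (lab[i][j] != 0) return;                  /* already labelled */
    lab[i][j] = id;
    int v = img[i][j];
    if (i > 0   && img[i-1][j] == v) flood(img, lab, i-1, j, id);
    if (i < W-1 && img[i+1][j] == v) flood(img, lab, i+1, j, id);
    if (j > 0   && img[i][j-1] == v) flood(img, lab, i, j-1, id);
    if (j < W-1 && img[i][j+1] == v) flood(img, lab, i, j+1, id);
}

/* Assign one label (1, 2, ...) per region; lab must start zeroed.
   Returns the number of regions found. */
int label_regions(int img[W][W], int lab[W][W])
{
    int next = 0;
    for (int i = 0; i < W; i++)
        for (int j = 0; j < W; j++)
            if (lab[i][j] == 0) flood(img, lab, i, j, ++next);
    return next;
}
```

After labelling, every pixel carries the label of its region, so two pixels share a label exactly when they are connected through equal-valued pixels.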

Surveillance cameras are video cameras used for the purpose of observing an area. They are often connected to a recording device, an IP network, and/or watched by a security guard/law enforcement officer. Cameras and recording equipment used to be relatively expensive and required human personnel to monitor camera footage; now, with cheaper production techniques, surveillance is simple and inexpensive enough to be used in home security systems and for everyday surveillance. Analysis of footage is made easier by automated software that organizes digital video footage into a searchable database, and by automated video analysis software (such as VIRAT and HumanID). The amount of footage is also drastically reduced by motion sensors which only record when motion is detected.

Image segmentation can be effectively employed in surveillance applications for detecting human presence by segmenting out the human figure; the presence of a human being in the surveillance area can thus be easily identified. Figure 12 shows the frame without a human and Figure 13 shows the frame with human intervention.

Figure 12 - Camera Image 1
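The presence-detection idea described above can be sketched as frame differencing against an empty reference frame: count the pixels that changed by more than a threshold and flag the frame when enough of them did. The function name and both threshold values are illustrative assumptions, not the project's settings:

```c
#include <stdlib.h>

#define W 8  /* illustrative frame size */

/* Compare the current frame with an empty reference frame and report
   presence when more than min_pixels gray levels changed by more than th. */
int presence_detected(int ref[W][W], int cur[W][W], int th, int min_pixels)
{
    int changed = 0;
    for (int i = 0; i < W; i++)
        for (int j = 0; j < W; j++)
            if (abs(cur[i][j] - ref[i][j]) > th)
                changed++;
    return changed >= min_pixels;
}
```

Requiring a minimum number of changed pixels makes the detector robust to isolated noisy pixels, in the same spirit as the median filtering used in this project.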

Figure 13 - Camera Image 2

8.2 Other Fields of Applications

Some of the other practical applications of image segmentation are:

8.2.1 Medical Imaging
- Locate tumors and other pathologies
- Measure tissue volumes
- Computer-guided surgery
- Diagnosis
- Treatment planning
- Study of anatomical structure

Medical imaging is the technique and process used to create images of the human body (or parts and function thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science (including the study of normal anatomy and physiology). Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are not usually referred to as medical imaging, but rather are a part of pathology.

As a discipline and in its widest sense, medical imaging is part of biological imaging and incorporates radiology (in the wider sense), nuclear medicine, investigative radiological sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g. for human pathological investigations). Measurement and recording techniques which are not primarily designed to produce images, such as electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (EKG) and others, but which produce data susceptible to be represented as maps (i.e. containing positional information), can be seen as forms of medical imaging.

In the clinical context, "invisible light" medical imaging is generally equated to radiology or "clinical imaging", and the medical practitioner responsible for interpreting (and sometimes acquiring) the images is a radiologist. "Visible light" medical imaging involves digital video or still pictures that can be seen without special equipment; dermatology and wound care are two modalities that utilize visible light imagery. Diagnostic radiography designates the technical aspects of medical imaging and in particular the acquisition of medical images. The radiographer or radiologic technologist is usually responsible for acquiring medical images of diagnostic quality, although some radiological interventions are performed by radiologists. While radiology is an evaluation of anatomy, nuclear medicine provides functional assessment.

As a field of scientific investigation, medical imaging constitutes a sub-discipline of biomedical engineering, medical physics or medicine depending on the context. Research and development in the area of instrumentation, image acquisition (e.g. radiography), modelling and quantification are usually the preserve of biomedical engineering, medical physics and computer science, while research into the application and interpretation of medical images is usually the preserve of radiology and the medical sub-discipline relevant to the medical condition or area of medical science (neuroscience, cardiology, psychiatry, psychology, etc.) under investigation. Many of the techniques developed for medical imaging also have scientific and industrial applications.

Up until 2010, 5 billion medical imaging studies had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States.

Medical imaging is often perceived to designate the set of techniques that noninvasively produce images of the internal aspect of the body.
In this restricted sense, medical imaging can be seen as the solution of mathematical inverse problems: cause (the properties of living tissue) is inferred from effect (the observed signal). In the case of ultrasonography, the probe consists of ultrasonic pressure waves, and echoes inside the tissue show the internal structure. In the case of projection radiography, the probe is X-ray radiation, which is absorbed at different rates in different tissue types such as bone, muscle and fat.

A magnetic resonance imaging instrument (MRI scanner), or "nuclear magnetic resonance (NMR) imaging" scanner as it was originally known, uses powerful magnets to polarize and excite hydrogen nuclei (single protons) in water molecules in human tissue, producing a detectable signal which is spatially encoded, resulting in images of the body.

Figure 14 - A brain MRI Representation

The MRI machine emits an RF (radio frequency) pulse that specifically binds only to hydrogen. The system sends the pulse to the area of the body to be examined, and the pulse makes the protons in that area absorb the energy needed to make them spin in a different direction; this is the "resonance" part of MRI. The RF pulse makes them (only the one or two extra unmatched protons per million) spin at a specific frequency, in a specific direction. The particular frequency of resonance is called the Larmor frequency and is calculated based on the particular tissue being imaged and the strength of the main magnetic field.

Because CT and MRI are sensitive to different tissue properties, the appearance of the images obtained with the two techniques differs markedly. In CT, X-rays must be blocked by some form of dense tissue to create an image, so the image quality when looking at soft tissues will be poor. In MRI, while any nucleus with a net nuclear spin can be used, the proton of the hydrogen atom remains the most widely used, especially in the clinical setting, because it is so ubiquitous and returns a large signal. This nucleus, present in water molecules, allows the excellent soft-tissue contrast achievable with MRI.
Since Because CT and MRI are sensitive to different tissue properties. collected through an RF antenna. MRI does not involve the use of ionizing radiation and is therefore not associated with the same health hazards. called the gradient field(s).Colour Image Segmentation Using FPGA particular frequency of resonance is called the Larmour frequency and is calculated based on the particular tissue being imaged and the strength of the main magnetic field. tomographic. there is well-identified health risks associated with tissue heating from exposure to the RF field and the presence of implanted devices in the body. Like CT. Dept. such as pace makers. there are no known longterm effects of exposure to strong static fields (this is the subject of some debate. X-rays must be blocked by some form of dense tissue to create an image. because it is so ubiquitous and returns a large signal. because MRI has only been in use since the early 1980s. which may be considered a generalization of the single-slice. For example. MRI traditionally creates a two dimensional image of a thin "slice" of the body and is therefore considered a tomographic imaging technique. In CT. In MRI. especially in the clinical setting. 8. called the static field. of ECE. concept. present in water molecules. VJCETPage 58 . the proton of the hydrogen atom remains the most widely used. One of the ways to do this is by comparing selected facial features from the image and a facial database. However. a weaker time-varying (on the order of 1 kHz) field(s) for spatial encoding. see 'Safety' in MRI) and therefore there is no limit to the number of scans to which an individual can be subjected. These risks are strictly controlled as part of the design of the instrument and the scanning protocols used. This nucleus. while any nucleus with a net nuclear spin can be used. in contrast with X-ray and CT. Modern MRI instruments are capable of producing images in the form of 3D blocks. 
MRI uses three electromagnetic fields: a very strong (on the order of units of teslas) static magnetic field, called the static field, to polarize the hydrogen nuclei; weaker time-varying (on the order of 1 kHz) fields, called the gradient fields, for spatial encoding; and a weak radio-frequency (RF) field for manipulation of the hydrogen nuclei to produce measurable signals, collected through an RF antenna.

Like CT, MRI traditionally creates a two-dimensional image of a thin "slice" of the body and is therefore considered a tomographic imaging technique; modern MRI instruments are capable of producing images in the form of 3D blocks, which may be considered a generalization of the single-slice, tomographic, concept. Unlike CT, MRI does not involve the use of ionizing radiation and is therefore not associated with the same health hazards. For example, because MRI has only been in use since the early 1980s, there are no known long-term effects of exposure to strong static fields (this is the subject of some debate; see 'Safety' in MRI), and therefore there is no limit to the number of scans to which an individual can be subjected, in contrast with X-ray and CT. However, there are well-identified health risks associated with tissue heating from exposure to the RF field and with the presence of implanted devices in the body, such as pacemakers. These risks are strictly controlled as part of the design of the instrument and the scanning protocols used.

8.2.2 Face Recognition

A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image and a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems.
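The Larmor relation mentioned in the MRI discussion above ties the resonance frequency to the main-field strength: f = γ · B0. The hydrogen gyromagnetic ratio of about 42.58 MHz/T used below is a standard textbook value, not a figure from this report:

```c
/* Larmor frequency: f = gamma * B0. For hydrogen (1H), gamma is roughly
   42.58 MHz per tesla (standard textbook value, assumed here). */
double larmor_mhz(double gamma_mhz_per_tesla, double b0_tesla)
{
    return gamma_mhz_per_tesla * b0_tesla;
}
```

At the 1.5 T field strength common in clinical scanners this gives roughly 63.9 MHz, which is why stronger magnets shift the RF electronics to higher frequencies.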

Some facial recognition algorithms identify faces by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face detection, providing a sort of compressed face representation; a probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features.

8.2.3 Fingerprint Recognition

Fingerprint recognition or fingerprint authentication refers to the automated method of verifying a match between two human fingerprints. Fingerprints are one of many forms of biometrics used to identify individuals and verify their identity. This article touches on two major classes of algorithms (minutia and pattern) and four sensor designs (optical, ultrasonic, passive capacitance, and active capacitance).

Pattern-based algorithms compare the basic fingerprint patterns (arch, whorl, and loop) between a previously stored template and a candidate fingerprint. This requires that the images be aligned in the same orientation; to do this, the algorithm finds a central point in the fingerprint image and centers on that.

Optical fingerprint imaging involves capturing a digital image of the print using visible light. This type of sensor is, in essence, a specialized digital camera. The top layer of the sensor, where the finger is placed, is known as the touch surface. Beneath this layer is a light-emitting phosphor layer which illuminates the surface of the finger. The light reflected from the finger passes through the phosphor layer to an array of solid-state pixels (a charge-coupled device) which captures a visual image of the fingerprint. However, unlike capacitive sensors, this sensor technology is not susceptible to electrostatic discharge damage.
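Template matching, which both the face system and the pattern-based fingerprint matcher above rely on, can be sketched as a sum-of-squared-differences search. The image and template sizes, names, and the SSD score are illustrative, not the actual methods of those systems:

```c
#define IW 8   /* illustrative image size    */
#define TW 3   /* illustrative template size */

/* Slide the template over every position in the image and report the
   top-left corner of the best match, i.e. the position with the lowest
   sum of squared differences (SSD). */
void best_match(int img[IW][IW], int tpl[TW][TW], int *best_i, int *best_j)
{
    long best = -1;
    for (int i = 0; i + TW <= IW; i++) {
        for (int j = 0; j + TW <= IW; j++) {
            long ssd = 0;
            for (int u = 0; u < TW; u++)
                for (int v = 0; v < TW; v++) {
                    long d = img[i + u][j + v] - tpl[u][v];
                    ssd += d * d;   /* accumulate squared difference */
                }
            if (best < 0 || ssd < best) {
                best = ssd;
                *best_i = i;
                *best_j = j;
            }
        }
    }
}
```

An SSD of zero means an exact match; a real matcher would normalise for lighting and report a confidence score rather than a raw distance.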
it is possible for an individual to erode the outer layer of skin on the fingertips to the point where the fingerprint is no longer visible. where the finger is placed. A scratched or dirty touch surface can cause a bad image of the fingerprint.Colour Image Segmentation Using FPGA Some facial recognition algorithms identify faces by extracting landmarks. For example. the algorithm finds a central point in the fingerprint image and centers on that. Also. It can also be easily fooled by an image of a fingerprint if not coupled with a "live finger" detector. Beneath this layer is a light-emitting phosphor layer which illuminates the surface of the finger. However. cheekbones. A disadvantage of this type of sensor is the fact that the imaging capabilities are affected by the quality of skin on the finger.2. only saving the data in the image that is useful for face detection.

and calibration."The main categories into which MV applications fall are quality assurance.4 Machine Vision Machine vision (MV) is a branch of engineering that uses computer vision in the context of manufacturing. robot guidance. pattern recognition. and lighting that has been designed to provide the differentiation required by subsequent processing. the first step in the MV process is acquisition of an image. Techniques used in MV include: thresholding (converting an image with gray tones to black and white). edge detection. there was little standardization in the processes used in MV. lenses.While the scope of MV is broad and a comprehensive definition is difficult to distil. MV processes are targeted at "recognizing the actual objects in an image and assigning properties to those objects--understanding what they mean. typically using cameras. VJCETPage 60 . Put another way. The candidate fingerprint image is graphically compared with the template to determine the degree to which they match. the analysis of images to extract data for controlling a process or activity. matching. blob extraction.2.MV software packages then employ various digital image processing techniques to allow the hardware to recognize what it is looking at. As of 2006. size. sorting. Dept. barcode reading. 8. and orientation of patterns within the aligned fingerprint image. material handling. and template matching (finding.Colour Image Segmentation Using FPGA the type. and/or counting specific patterns). optical character recognition.. gauging (measuring object dimensions).a "generally accepted definition of machine vision is '.Nonetheless. of ECE.. segmentation.

Chapter 9 MERITS AND DEMERITS

9.1 ADVANTAGES
1. Low cost, as we have used a Spartan FPGA.
2. Higher accuracy due to the combination of different algorithms.
3. Since the hardware is an FPGA, the software part can be updated with ease as technology advances.

9.2 LIMITATIONS
1. Higher processing time.
2. As the image size increases, the programming becomes more complex and lengthy.
3. To implement image processing in 3 planes we need a higher FPGA series, like the Virtex series.

Chapter 10 SCOPE

10.1 FUTURE EXPANSION
1. Implementation of the algorithm in 3 planes.
2. To develop a real-time surveillance system.

10.2 CONCLUSION

The colour image segmentation algorithm was implemented successfully on the FPGA Spartan-3E kit. The project was implemented section by section and the desired output of each one was verified. With the successful completion of our project, we were able to broaden the horizon of our knowledge, and we are happy to have successfully presented our project, colour image segmentation using FPGA.

Chapter 11 BIBLIOGRAPHY

[1] Jun Tung, "Colour Image Segmentation based on Region Growing Approach", Xi'an Shiyou University.
[2] Yining Deng, B.S. Manjunath and Hyundoo Shin, "Colour Image Segmentation", University of California.
[3] Olivier Faugeras, "Image Segmentation - A Perspective".
[4] "Study on the MATLAB image segmentation".
[5] "The Research on Stereo Matching Algorithm based on Region-growth".
[6] "A Cell Image Segmentation of Gastric Cancer Based on Region-Growing and Watersheds".
[7] "Image dense stereo matching by technique of region-growing", Computer Vision and Image Understanding.
[8] "Fast computation of matching value for binocular stereo vision".

Websites:
www.mathworks.com
www.xilinx.com
www.edaboard.com
www.cs.toronto.edu
