
COLOUR IDENTIFYING AND PICKING ROBOT WITH OPENCV

Premkumar S., Sreeja Krishnan, Shyama A.
Department of Electronics and Communication Engineering
Sree Buddha College of Engineering, Pattoor
s_premkumar999@yahoo.com, sreejakrishnan.sk@gmail.com, shyama.gsu@gmail.com

Saritha N. R.
Assistant Professor, Department of ECE
Sree Buddha College of Engineering, Pattoor
nr.saritha5440@gmail.com

Abstract- Robotics is a fascinating and fast-developing field of engineering; robots are replacing humans in many complicated industrial tasks, and research and development in robotics are increasing day by day. This paper aims to contribute to the worldwide robotics development community. It presents a colour-identifying object-picking robot that can track and pick objects based on their colour. The colours are identified using colour segmentation methods from image processing. The robot is a wired robot connected to a desktop computer via a parallel port and a USB port. The user specifies the colour to be picked, the webcam on the robot takes snapshots, the object is identified using a colour segmentation technique, and the robot is directed to pick the object. Intel's powerful computer vision library, OpenCV, is used here as the platform for image processing and control.

Keywords: Robotics, image processing, OpenCV.

I. INTRODUCTION

A robot is a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks. It is usually an electromechanical machine that can perform tasks automatically, guided by a program or circuitry. Robots are often equipped with a variety of sensors, depending on the task, that allow them to collect information about their environment. Some robots require a degree of guidance, which may be provided using a remote control or a computer interface. In short, a robot is a machine that senses the world, processes the sensor information with a computer, and then does something in response to that information. The science and technology dealing with the study of robots, their design, manufacture, and application is known as robotics. Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, and with the computer systems for their control, sensory feedback, and information processing. Robotics is, to a very large extent, all about system integration, achieving a task by an actuated mechanical device via an intelligent integration of components,

many of which it shares with other domains such as systems and control, computer science, character animation, machine design, computer vision, artificial intelligence, cognitive science, and biomechanics.

The robot control software used here is built on OpenCV. The main function of robot control software is the motion control of the robot. OpenCV (Open Source Computer Vision Library) is a library of programming functions aimed mainly at real-time computer vision, developed by Intel and now supported by Willow Garage. It focuses mainly on real-time image processing. The motion of the robot's manipulator joints, the tool, or the gripper can be described in different coordinate systems. These coordinate systems are used for the realization of several control functions, including off-line programming, program adjustment, coordination of the motion of several robots or of a robot and additional servo drives, jogging motion, copying programs from one robot to another, etc.

II. ALGORITHM FOR COLOUR IDENTIFICATION

In the most general sense, a colour model is an abstract mathematical model describing the way colours can be represented as tuples of numbers, typically as three or four values or colour components (e.g. RGB and CMYK are colour models). When such a model is associated with a precise description of how the components are to be interpreted, for example viewing conditions, the resulting set of colours is called a colour space. In practice, each component takes values between 0 and 255 (8 bits), corresponding to different colour intensity values. Colour spaces are a basis for colour image understanding, as they produce information about object location for processing purposes.

In this robot we need to identify different colours at real-time speed and with good efficiency under different lighting conditions; for this purpose, the simplest and most efficient colour model is the HSV (Hue, Saturation, Value) model. The colour is determined by the hue value, and each colour has a distinct hue value; saturation denotes the purity of the colour determined by the hue, and value determines the intensity of the colour. The algorithm identifies a target colour in an image by an HSV threshold method: each pixel of the image is taken, and the Euclidean distance between the target HSV values and the HSV values of each pixel of the snapshot is measured.

This Euclidean distance is then thresholded against a reference value calibrated under different lighting conditions. A binary image is created to store the result, and each pixel of the binary image satisfying the threshold condition is taken HIGH. After producing the binary image, the centroid of the object is measured, and based on that position the robot is moved to track the object. The algorithm is as follows (a code sketch of these steps is given at the end of the Construction subsection below):

Step 1: Read the target colour to be picked from the user.
Step 2: Initialize the pixel pointer to the first pixel.
Step 3: Initialize a binary image with the same resolution as the webcam image.
Step 4: Take the Euclidean distance between the HSV values of the target colour and the HSV values of the current pixel at the pointer.
Step 5: Compare the Euclidean distance with a threshold value (calibrated by testing the algorithm under different lighting conditions).
Step 6: If the Euclidean distance of the pixel meets the threshold condition, the corresponding pixel in the binary image is taken HIGH.
Step 7: Increment the pixel pointer.
Step 8: If the pointer has not reached the last pixel, go to Step 4.
Step 9: When the end of the image is reached, we have an image that is white in the portion where the target is located; find the centroid to obtain the location of the object.

III. DESIGN OF THE SYSTEM

A. Construction

The complete design of the system includes the following components:

- Desktop computer installed with OpenCV and Dev C++
- USB webcam
- DC geared motor with L293D driver to drive the rover
- Robotic arm made from scrap PVC pipes and V3003 servo motors
- 8051 platform for arm control
- IR object detector
- Parallel port interface

The software part of the robot is written with the popular Dev C++ compiler and the powerful image processing library OpenCV from Intel. The image processing section consists of thresholding the image against a target colour, finding the position of the object, and sending the controlling signals to the robot via the parallel port. The complete mechanical assembly of the robot was designed in AutoCAD and made from scrap PVC pipes, which were stretched into sheets and cut into the required parts. A webcam, the 8051 platform, the manipulator arm, the IR proximity sensor, and the motor driver are attached to the robot. The webcam is connected to the PC via the USB port, and the other circuitry is connected to the parallel port.
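As a concrete illustration of the image processing section just mentioned, the following is a minimal sketch of the Section II colour identification algorithm using the modern OpenCV C++ API (the original work used the early Intel releases of OpenCV; the function name and the threshold constant here are illustrative, not taken from the paper):

```cpp
// Minimal sketch of the HSV Euclidean-distance threshold of Section II.
// Note: OpenCV hue spans 0-179 and wraps around; the paper describes a
// plain Euclidean distance, which is what is implemented here.
#include <opencv2/opencv.hpp>
#include <cmath>

// Returns a binary mask whose pixels close (in HSV space) to `targetHsv`
// are taken HIGH, and writes the mask centroid to `centroid`
// ((-1,-1) if no pixel matched).
cv::Mat hsvDistanceThreshold(const cv::Mat& bgrFrame, cv::Vec3b targetHsv,
                             double distThreshold, cv::Point2d& centroid)
{
    cv::Mat hsv;
    cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);

    // Step 3: binary image with the same resolution as the snapshot.
    cv::Mat binary = cv::Mat::zeros(hsv.size(), CV_8UC1);
    for (int y = 0; y < hsv.rows; ++y) {
        for (int x = 0; x < hsv.cols; ++x) {
            cv::Vec3b px = hsv.at<cv::Vec3b>(y, x);
            double dh = double(px[0]) - targetHsv[0];
            double ds = double(px[1]) - targetHsv[1];
            double dv = double(px[2]) - targetHsv[2];
            double dist = std::sqrt(dh * dh + ds * ds + dv * dv);  // Step 4
            if (dist < distThreshold)                              // Steps 5-6
                binary.at<uchar>(y, x) = 255;                      // pixel taken HIGH
        }
    }
    // Step 9: centroid of the white region via image moments.
    cv::Moments m = cv::moments(binary, true);
    centroid = (m.m00 > 0) ? cv::Point2d(m.m10 / m.m00, m.m01 / m.m00)
                           : cv::Point2d(-1, -1);
    return binary;
}
```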

B. Block diagram

Figure 1: Block diagram of the robot

Figure 1 shows the block diagram of the robot. The system is not completely embedded: the entire robotic system is connected to a desktop computer via a parallel port and a USB port. The controlling section of the robot is connected to the parallel port, and the image analysing part is connected via USB. The power supply for the circuits on the robot is also provided by the USB port. The entire image processing is done on the desktop computer using the OpenCV library from Intel Corporation, implemented in C++ using the Dev C++ IDE. The webcam fitted on the robot is connected to the USB port; real-time video is streamed to OpenCV, objects having the target colour are thresholded, the position of the object is identified, and based on the observed position the controlling sequence is applied at the parallel port.

Regarding the parallel port interface, the entire controlling and object detection system is connected to the system parallel port with a 25-pin D-type male connector. The 4-bit MSB of the data port is used for driving the robot; it is buffered with the L293D motor driver and then connected to the 100 RPM geared DC motor. One further bit of the data port is used to transfer the picking information to the robotic arm control board when the target is reached. The arm control section comprises an 8051 platform that accepts the control information for the robotic arm from the parallel port and controls the arm to pick the target object. The robotic arm is made of servo motors, which can be precisely controlled by a microcontroller. Before issuing the picking signal, it must be ensured that the object is within the vicinity of the arm; this is ensured by means of a proximity detection system. Here we use IR proximity detection because the required detection range is only 5 to 10 cm.
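To illustrate the parallel-port control just described, here is a hedged sketch using the Linux ioperm()/outb() interface. The original work ran Dev C++ on Windows, where a helper library such as inpout32 is the usual route; the specific motor bit patterns and the status bit chosen below are assumptions, since the paper does not state them.

```cpp
// Hedged sketch of the parallel-port control path. Bit layout follows the
// text: the 4-bit MSB of the data port drives the L293D motor driver, and
// one further data bit carries the pick command to the 8051 arm board.
#include <sys/io.h>   // ioperm(), outb(), inb() (Linux x86, needs root)
#include <cstdio>

const unsigned short DATA_PORT   = 0x378;  // standard LPT1 data register
const unsigned short STATUS_PORT = 0x379;  // status register (IR feedback)

// Illustrative motor patterns on D7..D4 (assumed, not given in the paper).
const unsigned char MOVE_LEFT  = 0x90;     // 1001 0000
const unsigned char MOVE_RIGHT = 0x60;     // 0110 0000
const unsigned char STOP       = 0x00;
const unsigned char PICK_BIT   = 0x08;     // D3: trigger the 8051 arm board (assumed)

int main()
{
    if (ioperm(DATA_PORT, 3, 1) != 0) {    // request access to 0x378-0x37A
        perror("ioperm");
        return 1;
    }
    outb(MOVE_RIGHT, DATA_PORT);           // steer toward the target
    unsigned char status = inb(STATUS_PORT);  // poll IR proximity feedback
    if (status & 0x40) {                   // assumed acknowledge bit on S6
        outb(STOP | PICK_BIT, DATA_PORT);  // stop the rover, command the arm
    }
    return 0;
}
```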

C. Working

The robot is connected to the PC via the parallel port and the USB port. The power for the board on the robot is taken from the USB port and from the battery fitted on the robot. First, the robot reads from the user which colour is to be picked; the robot then takes a snapshot of the objects in front of it and sends it to the image processing section. Here the snapshot's resolution is detected, and a binary image of the same resolution is created in the program. The algorithm of Section II then produces a binary image that is white at the position of the target object and black elsewhere. The centroid of this image is calculated, giving the centre coordinate (x, y) of the object. When the x coordinate is greater than half the total resolution in the x direction, the robot needs to move to the right; when the x coordinate is less than half the total resolution in the x direction, the robot needs to move to the left (see the steering sketch below). This movement information is transferred to the robot via the parallel port; the robot reads the pattern and controls its geared motor accordingly. During the movement toward the target, we must check whether the robot has reached the target. For that, an IR proximity detector is provided at the front of the robot, and the feedback from this sensor is continuously monitored in the program. When the feedback indicates that the object has been reached, the image processing is stopped, the robot is made to stop, and the computer commands the manipulator control of the robot to pick the object.
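The left/right decision can be summarised in a few lines. The following is a minimal sketch with placeholder command codes; the actual robot encodes these decisions as parallel-port bit patterns as shown earlier.

```cpp
// Steering decision from the centroid x coordinate, as described above.
// STEER_* codes are illustrative placeholders, not a format from the paper.
enum SteerCommand { STEER_STOP, STEER_LEFT, STEER_RIGHT };

SteerCommand steerFromCentroid(double cx, int frameWidth)
{
    if (cx < 0)                 return STEER_STOP;   // no target pixels found
    if (cx > frameWidth / 2.0)  return STEER_RIGHT;  // target in right half of frame
    return STEER_LEFT;                               // target in left half of frame
}
```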

Figure 2: Webcam and object detector

The IR proximity sensor consists of an IR source, a photodiode with a buffer transistor, and a comparator. The comparator output is fed to the status port of the parallel port, and continuous checking of the status port indicates whether the object has been reached. After reaching the object, the robot is stopped and the picking information is transferred to the 8051 through the data port. Since the program is developed in C++, it can easily be ported to any higher-end embedded platform, allowing the separate circuits used here to be integrated.

D. Algorithm

Step 1: Start.
Step 2: Initialize the parallel port values and initialize the webcam.
Step 3: Get the colour of the object to be picked.
Step 4: Stop the robot.
Step 5: Take a snapshot with the webcam.
Step 6: Initialize the pixel pointer to the first pixel of the snapshot image.
Step 7: Initialize a binary image with the same resolution as the snapshot image.
Step 8: Take the Euclidean distance of the H, S, and V values of the selected pixel from the target H, S, and V values.
Step 9: If the Euclidean distance meets the threshold condition, the corresponding pixel of the binary image is taken HIGH.
Step 10: If the pixel pointer has not reached the end of the image, increment the pixel pointer and go to Step 8.
Step 11: Calculate the centroid (x, y) of the binary image.
Step 12: If x is greater than half the total x-direction resolution, move the robot right.
Step 13: If x is less than half the total x-direction resolution, move the robot left.
Step 14: If x < 0 (no target found), stop the robot.
Step 15: If there is no feedback from the IR sensor, go to Step 5.
Step 16: If the IR feedback acknowledgment is received, trigger the manipulator control circuitry.
Step 17: Pick the object.
Step 18: Stop.
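Tying the pieces together, the following end-to-end sketch mirrors the loop of Steps 2 to 18, reusing the threshold and steering sketches from the earlier subsections. cv::VideoCapture is the standard OpenCV capture interface; the target HSV values, the threshold, and the elided parallel-port writes are assumptions.

```cpp
// End-to-end sketch of the control loop (Steps 2-18 above).
#include <opencv2/opencv.hpp>

// Defined in the earlier sketches; declared again so this fragment reads alone.
cv::Mat hsvDistanceThreshold(const cv::Mat& bgrFrame, cv::Vec3b targetHsv,
                             double distThreshold, cv::Point2d& centroid);
enum SteerCommand { STEER_STOP, STEER_LEFT, STEER_RIGHT };
SteerCommand steerFromCentroid(double cx, int frameWidth);

int main()
{
    cv::VideoCapture cam(0);                     // Step 2: initialize the webcam
    if (!cam.isOpened()) return 1;

    cv::Vec3b targetHsv(60, 200, 200);           // Step 3: e.g. a green target (assumed values)
    double threshold = 80.0;                     // calibrated per lighting conditions
    cv::Mat frame;
    cv::Point2d centroid;

    bool objectReached = false;                  // would be set from the IR status bit
    while (!objectReached && cam.read(frame)) {  // Steps 5 and 15: snapshot, loop until IR ACK
        // Steps 6-11: threshold the frame and locate the target.
        cv::Mat mask = hsvDistanceThreshold(frame, targetHsv, threshold, centroid);

        // Steps 12-14: decide the motor pattern from the centroid.
        SteerCommand cmd = steerFromCentroid(centroid.x, frame.cols);
        (void)cmd;
        // ...write the matching pattern to the parallel-port data register
        //    (see the ioperm/outb sketch above) and poll the status port for
        //    the IR acknowledge bit, updating objectReached...
    }
    // Steps 16-18: stop, trigger the 8051 arm controller, pick the object.
    return 0;
}
```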

IV. RESULTS AND DISCUSSION

We implemented the colour identifying and picking robot using OpenCV, and the robot was found to perform at real-time speed. During testing we used five different colours, both primary and secondary; the robot was able to identify each colour and pick the objects under different lighting conditions. The complete mechanical assembly of the robot is made from scrap PVC pipes, which reduced the cost of the mechanical assembly to just $1.

Figure 3: Inside top view of the robot

Software results of the robot are as follows:

Figure 4: Target-coloured object: stream from webcam

Figure 5: Position of target and status of the parallel port

Figure 6: Detection of blue object

Figure 7: Detection of green object

Figure 8: Detection of orange object

Figure 9: Detection of yellow object

V. CONCLUSION

In this paper we have implemented a robot that can trace and pick both primary- and secondary-coloured objects at real-time speed. The system was developed with Intel's powerful computer vision library, OpenCV, and the program was compiled with Dev C++. With this, the robot can detect any coloured object, as specified by the user, within a range of 5 meters; this range can be extended by using a wireless link instead of the wired parallel port. After implementation we found that the system gives efficient real-time performance, with a processing speed of about 20 frames per second even in normal lighting conditions. This work can be extended in the future to a fruit-plucking robot, an industrial sorting robot, and other colour-based tracking applications.

