
Using MATLAB Image Processing Tool for Live Video Processing

Arun Dayal Udai


Department of Mechanical Engineering, Birla Institute of Technology, Mesra

One of the greatest gifts we have is our eyes. The high-resolution images they acquire are processed at an unimaginably high speed, so that our motor functions can react to them in real time. Roughly 40 per cent of this processing is said to happen in the eye itself and the remaining 60 per cent in the brain, so our eyes together with our brain form a complete vision sensor. Through various approaches we are still trying to imitate a few capabilities of this multifunctional natural device, capabilities our eyes exercise without our even knowing.

One of the easiest ways to give some vision functions to a highly specialised robot is to use a software tool like the MATLAB Image Processing Toolbox. Used in conjunction with a real-time video capture device, such as a web camera, it can acquire live video images and process them on a reasonably powerful system through MATLAB. In this article I have used this tool to implement some of the functions that robots very commonly need in various robot competitions. This demonstration is meant as a first step towards a real application robot, such as a Mars exploration robot, which uses artificial intelligence along with vision capabilities.

Some of the functions commonly required of a moving robot are: it should move along a guided path to complete its targets, as a line-following robot or a vehicle moving in a lane does; it should understand external command sequences given remotely through radio, sound or light signals, like the traffic lights on roads; and it should interpret any obstruction coming in front of it and take corrective action to protect itself and its surroundings. This list is not exhaustive, and actual functionality may go well beyond it. Obviously, a robot is not expected to untangle a hopelessly snarled traffic junction! Procedures for performing such tasks through the MATLAB Image Processing Toolbox are demonstrated here.

Detecting coloured balls


Balls are among the best test objects for working with robots in a controlled environment. A ball can represent an object to handle, or an obstruction which a robot has to pick and place or remove from its way. A soccer robot locates a ball to play with it; a different robot may be assigned to locate balls of different sizes in the arena and plan a path for manipulation. An example of such a task is demonstrated through the following code, where an image containing balls is provided and the code finds the number of balls, the largest ball, and its location. The code is commented for easy understanding and is explained in this section.

%% Statistical Image Processing
clc; clear;                        % Clear workspace and the output screen

%% Read image and display
img = imread('balls3.bmp');
j = size(img);                     % Get size of the image in j(rows, columns)
imtool(img);                       % Display image in analysis tool

%% Convert to binary image d(x, y) = 0 or 1 (segmentation process)
d = zeros(j(1), j(2));             % Preallocate the binary image
for k = 1:j(1)                     % Scan through rows (y coordinates)
    for l = 1:j(2)                 % Scan through columns (x coordinates)
        % If red and green colour intensities > 160, for the given ball
        if img(k,l,1) > 160 && img(k,l,2) > 160
            d(k,l) = 1;            % Mark this pixel as intensity 1 (white)
        else
            d(k,l) = 0;            % Mark this pixel as intensity 0 (black)
        end
    end
end
% imshow(d);                       % View the resultant image

%% Alternative (vectorised) way, same criterion
d = img(:,:,1) > 160 & img(:,:,2) > 160;
% imshow(d);                       % View the resultant image

% Wrong way -- im2bw thresholds overall brightness, not colour
% BW_img = im2bw(img);
% imshow(BW_img);

%% Count approximate number of balls
[label, num] = bwlabel(d, 8);      % Use 4-connectivity for poor images
% Display all the regions of 1s in d, in different colours
imagesc(label);                    % Display the labelled regions

%% Using region properties
stats = regionprops(label, 'basic');
max_area = max([stats.Area]);
biggest = find([stats.Area] == max_area)
stats(biggest).Centroid
stats(biggest).Area
keepers = find([stats.Area] > 300);  % Discard small noise regions
num = length(keepers)                % Corrected number of balls

The above code opens the image file balls3.bmp and displays it in the image analysis tool, where one can move the cursor over the displayed image and observe the RGB (red, green, blue) colour values of any pixel. On analysis it is found that any pixel within a ball region has red and green values above 160, whereas any pixel outside these regions has significantly lower values. Taking this as the criterion, the given image is segmented into a binary image of the same dimensions, which contains 1s wherever there is a ball and 0s everywhere else. Two different approaches are used to perform the segmentation. The given pictures are shown in Fig. 1 below:

Fig. 1: Images of arena with balls

The images formed after segmentation are shown in Fig. 2.

Fig. 2: Images formed after segmentation and labelling

Counting the clusters of 1s in the binary image gives the approximate number of balls in the image. This can be done with the bwlabel command of the Image Processing Toolbox, which counts and labels all such connected regions of 1s in a binary image. The technique may fail on images having similarly coloured patches in the surface texture, or on images in which the balls touch each other. One of the most powerful commands in the toolbox is regionprops, which fetches the properties of all labelled regions in a binary image, such as their areas in pixels, centroids, eccentricities, etc. Based on the area or eccentricity one can easily extract the region of interest; a circular feature should have a low eccentricity. This method works well with clear images taken from the top.
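For instance, a sketch of such an eccentricity-based filter is given below; the cutoff of 0.85 and the 300-pixel minimum area are assumed values that would need tuning for a particular arena and camera.

% Sketch: keep only nearly circular regions (assumed cutoffs)
stats = regionprops(label, 'Area', 'Centroid', 'Eccentricity');
round_enough = [stats.Eccentricity] < 0.85;  % Circles are close to 0
big_enough = [stats.Area] > 300;             % Same area filter as above
balls = find(round_enough & big_enough);
num_balls = length(balls)
centres = reshape([stats(balls).Centroid], 2, [])'  % One (x, y) per row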

Detecting lanes on road


Most robot competitions require a robot to move on a road with lanes: the robot has to follow a lane and perform some specific tasks. Detecting and working with lanes becomes challenging in a real situation, with complex surroundings or with poor visibility and road conditions. In the current discussion a typical road is snapped from different angles, as shown in Fig. 3, with the robot in the left lane, which it has to follow. The code detects the left lane and the centre lane markings, and may be extended to instruct the robot in different situations. Now that the functionality of the Image Processing Toolbox is clear, the code is commented at each stage to explain what it does.

Fig. 3: Different views of the road

%% Code for lane detection
clc; clear;                            % Clear workspace and screen

%% Load raw image file
img = imread('Pic_left.jpg');          % Read image and show
imshow(img);

%% Crop region of interest
cropped = imcrop(img, [1 200 640 480]);

%% Segment by auto thresholding, converting the image to a binary image
level = graythresh(cropped);
BW_img = im2bw(cropped, level);

%% Clean up binary image, removing regions of fewer than 100 pixels
lanes = bwareaopen(BW_img, 100);
% For clearing border objects (may be required)
% lanes = imclearborder(lanes);

%% Trace region boundaries without holes and label them (returns L)
% B holds (row, column), i.e. (y, x), boundaries of the labelled regions
[B, L] = bwboundaries(lanes, 'noholes');
numregions = max(L(:));
% imshow(label2rgb(L));                % Show labelled regions in colours

%% Feature extraction: get all region properties
stats = regionprops(L, 'all');

%% Eccentricity - extracts lanes
shapes = [stats.Eccentricity];
% Rectangular lanes have high eccentricity, so these are kept
keepers = find(shapes > 0.98);

%% Major axis length - separates side lane and centre lanes
lanes = [stats.MajorAxisLength];
side_lane = find(lanes > 175);
centre_lane = find(lanes <= 175);

% Left lane separation (lanes left of the image centre line, approx.)
for index = 1:length(side_lane)
    if stats(side_lane(index)).Centroid(1) < 300
        left_lane = side_lane(index);
    end
end
detected_lanes = [left_lane centre_lane];

%% Display lanes by drawing boundaries
% Centre lanes
imshow(cropped)
for index = 1:length(centre_lane)
    outline = B{centre_lane(index)};
    line(outline(:,2), outline(:,1), 'Color', 'g', 'Linewidth', 2)
end
% Left lane
for index = 1:length(left_lane)
    outline = B{left_lane(index)};
    line(outline(:,2), outline(:,1), 'Color', 'r', 'Linewidth', 2)
end

Fig. 4: Stages of image processing: cropped, binary, labelled and final images

The above code needs to be tuned for different roads and lighting conditions. If the lane tracking system is connected to an external controller board, the board can be instructed through the serial port interface as follows:

% Serial port demonstration
s = serial('COM5');              % Define serial port object
set(s, 'BaudRate', 115200);      % Set baud rate
fopen(s);                        % Open serial port
fwrite(s, 'S');                  % Write a character S to the serial port
fclose(s)                        % Close serial port
delete(s)                        % Delete serial port object
clear s                          % Clear workspace

An alternative approach to lane detection uses the Hough transform for detecting lines in the scene. Any line other than the lanes on the road can be distinguished by morphological treatment. The procedure reads the colour image and converts it to a greyscale image for edge detection by the Canny edge detection algorithm. Once the image with the edges of the lanes is formed, the lines contained in it can be segregated by the Hough transform. The hough function implements the Standard Hough Transform (SHT). The SHT uses the parametric representation of a line:

rho = x*cos(theta) + y*sin(theta)

The variable rho is the distance from the origin to the line along a vector perpendicular to the line; theta is the angle between the x-axis and this vector. The SHT is a parameter-space matrix whose rows and columns correspond to rho and theta values respectively. A code for the above procedure follows, commented for easy understanding.
%% Clear workspace
clc; clear;

%% Read image and display
img = imread('Pic_center_high.jpg');
subplot(2,2,1), imshow(img); title('Original Image')

%% Crop image (get area of interest) and display
cropped = imcrop(img, [1 110 640 480]);
subplot(2,2,2), imshow(cropped); title('Cropped Image');

%% Convert to grayscale image
% Converting to NTSC black-and-white equivalent (may use rgb2gray)
% b = rgb2gray(cropped);
b = .30*cropped(:,:,1) + .59*cropped(:,:,2) + .11*cropped(:,:,3);
subplot(2,2,3), imshow(b); title('Grayscale Image')

%% Use grayscale image for edge detection
level = graythresh(b);            % Get threshold value
BW = edge(b, 'canny', level);
subplot(2,2,4), imshow(BW); title('Only Edges')

%% Use binary edges for Standard Hough Transform
% SHT uses the parametric line rho = x*cos(theta) + y*sin(theta)
% H -> the Hough transform matrix, NRHO-by-NTHETA
[H, Theta, Rho] = hough(BW);
% Identify peaks in the Hough transform
Peaks = houghpeaks(H, 5, 'threshold', ceil(0.3*max(H(:))));
% Extract lines based on the Hough transform
lines = houghlines(BW, Theta, Rho, Peaks, 'FillGap', 30, 'MinLength', 40);

% Plot lines
figure(2), imshow(cropped), hold on
xlabel('X'); ylabel('Y');
title('Cropped Image Showing Lines');
max_len = 0;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'green');
    % Plot beginnings and ends of lines
    plot(xy(1,1), xy(1,2), 'x', 'LineWidth', 2, 'Color', 'yellow');
    plot(xy(2,1), xy(2,2), 'x', 'LineWidth', 2, 'Color', 'red');
    % Determine the endpoints of the longest line segment
    len = norm(lines(k).point1 - lines(k).point2);
    if (len > max_len)
        max_len = len;
        xy_long = xy;
    end
end
% Highlight the longest line segment
plot(xy_long(:,1), xy_long(:,2), 'LineWidth', 2, 'Color', 'cyan');
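The detected lines can further be turned into a steering decision. The sketch below is only one possible scheme: it assumes the single-character command convention of the serial-port demonstration above, with hypothetical characters 'L', 'R' and 'F', and an assumed dead band of plus or minus 10 degrees.

% Sketch: turn the longest detected line into a steering command.
% theta is the Hough angle (degrees) of the normal to the line;
% the +/-10 degree dead band is an assumed tuning value.
[~, idx] = max(arrayfun(@(ln) norm(ln.point1 - ln.point2), lines));
theta = lines(idx).theta;
if theta > 10
    cmd = 'L';           % Hypothetical command: steer left
elseif theta < -10
    cmd = 'R';           % Hypothetical command: steer right
else
    cmd = 'F';           % Hypothetical command: keep straight
end
% fwrite(s, cmd);        % Send over the serial port opened earlier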

Detecting Traffic Signals


Traffic lights and signals are part and parcel of roads. If a robot is designed to move on a road, it is expected to understand the various traffic lights and signals. Understanding traffic lights is simple, as it involves merely detecting the colour of the light and its shape, which is normally circular. Detecting light signals becomes challenging when there are similarly coloured objects or lights nearby or in the background, when the signal is poorly visible due to bad weather, or when it is a different type of light altogether (an LED light, for example). Some of these cases are shown in Fig. 5 below. Detecting a traffic signal is similar to detecting a coloured ball, as demonstrated before. The code below demonstrates detecting a red traffic signal in all the above cases. The robot may then be instructed through the serial port to stop the vehicle in such a situation.

Fig. 5: Different cases of traffic lights

%% Image processing for traffic signals
clc; clear;

%% Read image and display
img = imread('traffic_light_red.jpg');
imtool(img);                     % Read and open image for analysis

%% Convert to binary image based on the RGB values of red colour
d = img(:,:,1) > 180 & img(:,:,2) < 100 & img(:,:,3) < 100;
imshow(d);                       % View the resultant image

%% Count approximate number of lights; label regions of 1s using
%  4-connected pixels
[label, num] = bwlabel(d, 4);
% imagesc(label);

%% Using region properties, discard regions of less than 500 pixels area
stats = regionprops(label, 'basic');
keepers = find([stats.Area] > 500);
num = length(keepers)            % Corrected number of red lights

% Find the largest red light and its location
max_area = max([stats.Area]);
biggest = find([stats.Area] == max_area);
stats(biggest).Centroid
% Regions can also be segregated based on size or eccentricity

So far we have been using image processing techniques only to get some of the basic things done; higher-level processing includes the use of neural network techniques for traffic signal recognition. Hybrid artificial intelligence techniques are also in use where the conditions are more complex.
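As a flavour of such higher-level processing, the minimal sketch below trains MATLAB's patternnet (from the Neural Network Toolbox) to classify a signal from colour features; the features and the tiny training set are purely hypothetical stand-ins for a real labelled data set.

% Minimal sketch: classify a signal from assumed colour features.
% X: 3-by-N features (mean R, G, B of the detected region, 0-1 range),
% T: 3-by-N one-hot targets (red / amber / green) - hypothetical data.
X = [200 30 40; 210 180 35; 30 190 60]'/255;  % One column per sample
T = eye(3);                        % One class per sample
net = patternnet(10);              % Feed-forward net, 10 hidden neurons
net.divideFcn = 'dividetrain';     % Tiny data set: train on everything
net.trainParam.showWindow = false; % Suppress the training GUI
net = train(net, X, T);
[~, class] = max(net(X))           % Predicted class of each sample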

Live Video Processing


A video-capturing device such as a web camera or handycam grabs 10 to 30 frames per second. These frames are similar to the images taken by an ordinary still camera, so when it comes to processing, the techniques involved remain the same as for normal images. The video stream continuously updates the images to be loaded for processing, so the image-processing algorithm must be fast enough to handle the high rate of incoming data; slow processing may cause frames to be overrun and information to be lost.

The video processing code below starts with live video acquisition from the attached video input device, and then continuously processes the frames supplied by it. A video input object vid is created through the MATLAB video acquisition command, which takes the adaptor name, device ID and a frame format supported by the device as input parameters. These can be queried by invoking the command imaqhwinfo at the MATLAB prompt. The frame size is kept much below the actual capacity of the device so that processing is faster; the right size is judged from the amount of data a particular requirement needs and the speed of the image-processing algorithm.

Once the object is created, different parameters can be set for operation in continuous capture mode. In the given code the video trigger mode is set to manual, and the trigger is repeated indefinitely thereafter. After each trigger one frame is fetched, and the loop checks for its availability before reading and processing it. The image-processing algorithm is coded within this loop, and actions may be taken depending on the conditions prevailing in the live video image. As the loop runs continuously, a provision to exit it may be added. Before ending the code it is important to flush the video data from memory and to delete and clear the video object from the workspace; this closes the video input device and lets any other application access it.
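For instance, the installed adaptors and the formats supported by the first detected device may be queried as follows before filling in the videoinput parameters (the winvideo adaptor shown is typical of Windows systems):

info = imaqhwinfo                      % List installed adaptors
dev = imaqhwinfo('winvideo');          % Query one adaptor (Windows webcams)
dev.DeviceInfo(1).SupportedFormats     % Formats of the first device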
clc;
% Create video input object:
% vid = videoinput(adaptorname, deviceID, format)
vid = videoinput('winvideo', 1, 'RGB24_320x240');
set(vid, 'FramesPerTrigger', 1);
set(vid, 'TriggerRepeat', Inf);
triggerconfig(vid, 'manual');
start(vid);
trigger(vid);
pause(1);
while (vid.FramesAvailable >= 0)
    % Obtain most recently acquired data (one frame)
    pdata = peekdata(vid, 1);
    % Auto-threshold the acquired image - converting to B&W (2-D image)
    level = graythresh(pdata);
    BW = im2bw(pdata, level);
    % Display processed image
    imshow(BW);
    % Algorithm for image processing goes here
    % ========================================
    % Image processing algorithm ends
    pause(0.05);
    flushdata(vid, 'triggers');
end
flushdata(vid);
delete(vid);
clear vid;
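As one possible filling for the placeholder section of the loop above, the sketch below reuses the red-light segmentation of the previous section on each live frame and sends the stop character over a serial port object s, assumed to be already open as demonstrated earlier; the 500-pixel cutoff is the same assumed value as before.

% Sketch for the placeholder section: stop when a red light is seen.
% Assumes a serial object s has been opened as demonstrated earlier.
d = pdata(:,:,1) > 180 & pdata(:,:,2) < 100 & pdata(:,:,3) < 100;
stats = regionprops(bwlabel(d, 4), 'basic');
if any([stats.Area] > 500)   % Assumed minimum size of a real light
    fwrite(s, 'S');          % Hypothetical stop command
end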

Beginners are advised to use the existing image processing algorithms implemented in the MATLAB Image Processing Toolbox through its commands, rather than coding the same from scratch. This makes the processing relatively fast, and the code runs at its optimised performance.
