For example, OpenCV images come in BGR format, while regular ROS images use the more standard RGB encoding. The cv_bridge package provides a convenient way to convert between them, as well as many other functions to transfer images into OpenCV variables transparently.
To learn how to use OpenCV, you will use the RGB camera of this TurtleBot3 robot.
Note that this TurtleBot is in a rather peculiar environment: on the floor, there is a painted yellow path and some stars of different colours.
What you will have to do is make the robot move in this environment, following the yellow line. For
that, we will divide the work into the following phases:
Get images from the ROS topic and convert them into OpenCV format
Process the images using OpenCV libraries to obtain the data we want for the task
Move the robot along the yellow line, based on the data obtained
Let's start.
1. Get Images from a ROS topic and show them with OpenCV
Before doing anything, create a new package called my_following_line_package that depends on
rospy, and add launch and scripts directories to it.
The first step will be to get the images from a ROS topic, put them into the OpenCV format, and
show them in the Graphical Tool.
Here you have an example of how to do it:
**line_follower_basics.py**
In [ ]:
#!/usr/bin/env python
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

class LineFollower(object):

    def __init__(self):
        self.bridge_object = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.camera_callback)

    def camera_callback(self, data):
        try:
            # We select bgr8 because it's the OpenCV encoding by default
            cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)
            return

        # Show the converted image in a window
        cv2.imshow("Image window", cv_image)
        cv2.waitKey(1)

def main():
    rospy.init_node('line_following_node', anonymous=True)
    line_follower_object = LineFollower()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down")
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
**END line_follower_basics.py**
Here there are several elements to comment on:
In [ ]:
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
These imports are the basic ones necessary to work with images in ROS.
You have the OpenCV library (cv2). Why the 2? Because cv2 is the name of OpenCV's modern Python module, used from version 2 onwards.
You have the numpy library, which makes matrix and other array operations easy to work with, and
cv_bridge, which allows ROS to work with OpenCV easily.
In [ ]:
self.image_sub = rospy.Subscriber("/camera/rgb/image_raw",Image,self.camera_callback)
The subscriber to the image topic. This topic publishes messages of the type
**sensor_msgs/Image**. Execute the following command to see what all the different variables
are inside this message type:
Execute in WebShell #1
In [ ]:
rosmsg show sensor_msgs/Image
std_msgs/Header header
uint32 seq
time stamp
string frame_id
uint32 height
uint32 width
string encoding
uint8 is_bigendian
uint32 step
uint8[] data
You can extract the data from specific fields of the message. For example, you can echo a single field of the topic directly:
Execute in WebShell #1
In [ ]:
rostopic echo -n1 /camera/rgb/image_raw/height
try:
# We select bgr8 because it's the OpenCV encoding by default
cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
except CvBridgeError as e:
print(e)
Retrieve the image from a ROS topic and store it in an OpenCV variable. The variable data contains the ROS message with the image captured by the camera.
In [ ]:
cv2.imshow("Image window", cv_image)
cv2.waitKey(1)
This will open a GUI where you can see the contents of the variable "cv_image." This is essential for seeing the effects of the different filters and croppings of the image afterwards.
In [ ]:
cv2.destroyAllWindows()
This command will close all the image windows when the program is terminated.
When executing this program, you will be able to see the image by clicking on the Graphical
Tools icon:
Then, launch the code using the command below, and test that it actually works.
Execute in WebShell #1
In [ ]:
rosrun my_following_line_package line_follower_basics.py
2. Process the Images to Get the Data We Want
In [ ]:
# We get image dimensions and crop the parts of the image we don't need
# Bear in mind that the first slice selects rows (start:end) and the second one columns.
# Select the limits so that the line is neither too close nor too far, and keep the cropped
# region as small as possible to make the processing faster.
height, width, channels = cv_image.shape
descentre = 160
rows_to_watch = 100
crop_img = cv_image[height // 2 + descentre:height // 2 + descentre + rows_to_watch, 1:width]
So why these values and not others? Well, it depends on the task. In this case, you are
interested in lines that are neither too far from the robot nor too near. If you concentrate on lines too
far away, the robot won't follow the line; it will just cut across the map. On the other hand, concentrating
on lines that are too close won't give the robot time to adapt to changes in the line.
It's also vital to optimise the size of the cropped region. If it's too big, too much data
will be processed, making your program too slow; on the other hand, it has to contain enough of the image to
work with.
In the end, you will have to adapt it to each situation.
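The row/column slicing used in the crop above can be tried out on a dummy array (the 1080x1920 shape here is just a stand-in, not necessarily your camera's resolution):

```python
import numpy as np

# Hypothetical stand-in for a camera frame: 1080 rows x 1920 columns x 3 channels
img = np.zeros((1080, 1920, 3), dtype=np.uint8)
height, width, channels = img.shape

descentre = 160
rows_to_watch = 100

# First index selects rows, second selects columns
crop = img[height // 2 + descentre:height // 2 + descentre + rows_to_watch, 0:width]
print(crop.shape)  # (100, 1920, 3)
```

Note that if the end row exceeds the image height, NumPy silently clips the slice, so the crop comes out shorter than rows_to_watch.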
Do Exercise U2.3 to test the effects of different values.
Once you have it, use the example code in a Python terminal, or create a tiny program that
uses numpy (as np) and cv2 to convert the colour to HSV.
In the example given, the colour of the line is HSV = [[[34 255 255]]].
Once you have it, you then have to select an upper and a lower bound for the range of colours
that you will consider yellow. The bigger the range, the more shades of your picked colour will
be accepted. How wide to make it depends on how your robot perceives the scene, the colour
variations in the image, and how critical it is to avoid mixing up similar colours.
OpenCV allows you to draw a lot of things over images, not only geometric shapes; in this
case, though, a circle will suffice.
More Info: http://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html
And then, finally, you show all of the image variables with their titles:
3. Move the TurtleBot3 based on the position of the Centroid
In [ ]:
error_x = cx - width / 2
twist_object = Twist()
twist_object.linear.x = 0.1
twist_object.angular.z = -error_x / 5000
rospy.loginfo("ANGULAR VALUE SENT===>"+str(twist_object.angular.z))
# Make it start turning
self.movekobuki_object.move_robot(twist_object)
This control is based on a proportional controller. This means that it will oscillate a lot and will
probably have a steady-state error, but it is the simplest way of moving the robot and getting the job done.
It always commands a constant linear speed, while the angular Z velocity depends on the difference
between the centroid's X coordinate and the centre of the image.
To move the robot, you can use this module :
**move_robot.py**
In [ ]:
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

class MoveKobuki(object):

    def __init__(self):
        # Publisher for the velocity commands of the robot
        self.cmd_vel_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        self.shutdown_detected = False

    def move_robot(self, twist_object):
        self.cmd_vel_pub.publish(twist_object)

    def clean_class(self):
        # Stop Robot
        twist_object = Twist()
        twist_object.linear.x = 0.0
        twist_object.angular.z = 0.0
        self.move_robot(twist_object)
        self.shutdown_detected = True

def main():
    rospy.init_node('move_robot_node', anonymous=True)
    movekobuki_object = MoveKobuki()
    twist_object = Twist()
    # Make it start turning
    twist_object.angular.z = 0.15
    rate = rospy.Rate(5)

    def shutdownhook():
        # works better than rospy.is_shutdown()
        movekobuki_object.clean_class()
        rospy.loginfo("shutdown time!")

    rospy.on_shutdown(shutdownhook)
    while not movekobuki_object.shutdown_detected:
        movekobuki_object.move_robot(twist_object)
        rate.sleep()

if __name__ == '__main__':
    main()
**END move_robot.py**
Here you have an example of how all of this code would be put together:
**follow_line_step_hsv.py**
In [ ]:
#!/usr/bin/env python
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image
from move_robot import MoveKobuki

class LineFollower(object):

    def __init__(self):
        self.bridge_object = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.camera_callback)
        self.movekobuki_object = MoveKobuki()

    def camera_callback(self, data):
        try:
            # We select bgr8 because it's the OpenCV encoding by default
            cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)
            return

        # We get image dimensions and crop the parts of the image we don't need
        # Bear in mind that the first slice selects rows and the second one columns.
        # Select the limits so that the line is neither too close nor too far, and keep the
        # cropped region as small as possible to make the processing faster.
        height, width, channels = cv_image.shape
        descentre = 160
        rows_to_watch = 100
        crop_img = cv_image[height // 2 + descentre:height // 2 + descentre + rows_to_watch, 1:width]

        # Convert from BGR to HSV and threshold the yellow line
        # (bounds chosen around the measured line colour, H ~ 34; tune them for your scene)
        hsv = cv2.cvtColor(crop_img, cv2.COLOR_BGR2HSV)
        lower_yellow = np.array([20, 100, 100])
        upper_yellow = np.array([50, 255, 255])
        mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

        # Calculate the centroid of the detected blob; fall back to the
        # image centre if the line was lost (empty mask)
        m = cv2.moments(mask, False)
        try:
            cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
        except ZeroDivisionError:
            cx, cy = width / 2, height / 2

        # Keep only the masked part of the cropped image
        res = cv2.bitwise_and(crop_img, crop_img, mask=mask)

        # Draw the centroid on the resulting image
        cv2.circle(res, (int(cx), int(cy)), 10, (0, 0, 255), -1)

        cv2.imshow("Original", cv_image)
        cv2.imshow("HSV", hsv)
        cv2.imshow("MASK", mask)
        cv2.imshow("RES", res)
        cv2.waitKey(1)

        # Proportional control based on the horizontal deviation of the centroid
        error_x = cx - width / 2
        twist_object = Twist()
        twist_object.linear.x = 0.1
        twist_object.angular.z = -error_x / 5000
        rospy.loginfo("ANGULAR VALUE SENT===>"+str(twist_object.angular.z))
        # Make it start turning
        self.movekobuki_object.move_robot(twist_object)

    def clean_up(self):
        self.movekobuki_object.clean_class()
        cv2.destroyAllWindows()

def main():
    rospy.init_node('line_following_node', anonymous=True)
    line_follower_object = LineFollower()
    rate = rospy.Rate(5)

    def shutdownhook():
        # Called on Ctrl-C: stop the robot and close the image windows
        line_follower_object.clean_up()
        rospy.loginfo("shutdown time!")

    rospy.on_shutdown(shutdownhook)
    while not rospy.is_shutdown():
        rate.sleep()

if __name__ == '__main__':
    main()
**END follow_line_step_hsv.py**
**Exercise 3.2**
Create a Python script inside your package with the provided code. Execute it and see how it
performs.
Try some improvements:
Lower the speed of the robot to see if it works better; change the linear and the angular
speeds.
Change the behaviour of the robot; maybe create a recovery behaviour for when it loses the
line.
**END Exercise 3.2**
**Exercise 3.3**
Try changing the colour to be tracked. Try to track the three different star colours. Use the colour picker
to get the exact RGB colour for:
RedStar
GreenStar
BlueStar
Here you have an example of how it should look with the blue star:
Also, try changing the values of the upper and lower bounds to the following options and see the
results:
LOOSE color detection: lower_yellow = np.array([0,50,50]), upper_yellow =
np.array([255,255,255])
STRICT color detection: lower_yellow = np.array([33,254,254]), upper_yellow =
np.array([36,255,255])
Here you have an example of what you should see when changing the lower and upper:
LOOSE color detection:
See that in the loose version, all the colours that are green or yellow are detected, while in the
strict version, even part of the yellow line is cut off because it appears slightly different to the
camera sensor.
**END Exercise 3.3**
**Exercise 3.4**
Try executing the code with different values of descentre and rows_to_watch and see how
the line following performs.
Try the following values:
descentre = 0, rows_to_watch = 20. In this case, you control the center of the image and a
very small piece.
descentre = 0, rows_to_watch = 200. In this case, you are controlling nearly all of the lower
part of the image.
descentre = 200, rows_to_watch = 20. In this case, you are controlling only a small fraction
of the lower part of the image.