Journal of Physics: Conference Series 2303 (2022) 012053
doi:10.1088/1742-6596/2303/1/012053
PAPER • OPEN ACCESS

To cite this article: Xiujuan Bao et al 2022 J. Phys.: Conf. Ser. 2303 012053

Robot intelligent grasping experimental platform combining Jetson NANO and machine vision

Xiujuan Bao1, Lianhui Li2*, Weiqiang OU3 and Lu Zhou1


1 College of Artificial Intelligence, Nankai University, Tianjin, China
2 Tianjin Tonglian Electric Co., Ltd., Tianjin, China
3 Shenzhen Hiwonder Technology Co., Ltd., Shenzhen, China
* Corresponding author: 419028836@qq.com

Abstract. To meet the requirements of an intelligent manufacturing experimental teaching project, this work develops a robot intelligent grasping experimental platform based on Jetson NANO and machine vision. The platform adopts a Jetson NANO controller running Google's open-source machine learning framework TensorFlow + Keras, develops the machine vision algorithm with the computer vision library OpenCV, and realizes the grasping and handling of materials with a 6-DOF articulated robot, so it can support open, project-based experimental teaching. The experimental platform integrates knowledge from robot control, machine vision, mechatronics and other courses, which can train students' ability to solve complex engineering problems and stimulate their creative thinking. The platform is also highly open: in experimental teaching, only the objectives to be achieved need to be specified, without restricting the specific implementation methods. Students can independently design and test the algorithms of each stage of the visual positioning system in software, and experience the relevant theoretical knowledge and practical methods in depth through hands-on experiments.

1. Introduction
Intelligent grasping is an important link in intelligent manufacturing production. With the wide application of intelligent manufacturing in industry, the demand for robot intelligent grasping systems is growing [1], but traditional manipulator grasping is realized by teaching or analytical methods, whose robustness is poor. In recent years, machine vision algorithms integrating deep learning have developed continuously along open-source technology routes [2]. In this project, we use a machine vision algorithm to identify and locate the target, and use the Jetson NANO control board to complete grasping path planning and the coordination of multiple mechanical arms, realize closed-loop control of the servos and PWM proportional control of the DC actuator, and control the mechanical arm to rotate according to the angle of the object block to complete the material grasping action. The Jetson NANO control board integrates intelligent identification, path planning, cooperative operation, system interconnection, grasping execution and the other links of the intelligent manufacturing process, so as to improve the accuracy and speed of material grasping, which has important engineering significance and practical value.

Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd

2. Experimental hardware composition


The intelligent grasping experimental platform is mainly composed of a vision system, a control system and the manipulator body. The hardware composition of the experimental platform is shown in Figure 1.

Figure 1. The hardware composition of the experimental platform (conveyor belt, color digital camera, Jetson NANO and the 6-DOF serial manipulator)

2.1. The vision system


The camera supports MJPG format output and an adjustable focal length, and provides 480p, 30 fps low-latency images while occupying almost no computing resources. It connects via driver-free USB and, combined with the computer vision library OpenCV, can realize functions such as face recognition, object recognition, image classification, feature learning, color recognition, visual inspection, label recognition, QR code recognition and bar code recognition. On this platform, AI color sorting can intelligently sort objects of specific colors. Image processing adopts color-threshold binarization based on the LAB color space, and realizes the extraction and identification of object colors through operations such as erosion and dilation.
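A minimal sketch of this color-extraction step is given below; the LAB threshold values are illustrative placeholders, not the platform's actual color dictionary.

```python
import cv2
import numpy as np

# Illustrative LAB bounds for a red block; the platform's actual color
# dictionary values are not given in the paper.
LAB_LOWER = np.array([20, 150, 130], dtype=np.uint8)
LAB_UPPER = np.array([255, 200, 180], dtype=np.uint8)

def extract_color_mask(frame_bgr):
    """Color-threshold binarization in LAB space, cleaned up with
    erosion and dilation as described above."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    mask = cv2.inRange(lab, LAB_LOWER, LAB_UPPER)  # binarize by color threshold
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=2)   # remove speckle noise
    mask = cv2.dilate(mask, kernel, iterations=2)  # restore the block's area
    return mask
```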

2.2. Jetson NANO


Jetson NANO is a GPU computing platform launched by NVIDIA after the success of the TX2 and Xavier [3-4]. The control board contains a 128-core NVIDIA Maxwell architecture GPU and a 4-core ARM Cortex-A57 MPCore CPU, and its 4 GB of memory is shared between the CPU and the GPU. It adopts a detachable core-board design, which can be easily integrated into various embedded applications, and its power consumption is very low (only 5 W in low-power mode). With the help of JetPack (the Jetson development kit), rich AI computing functions can be realized on the GPU under Ubuntu, such as high-performance deep learning, sensor driver development and computer vision development. Its specific configuration parameters are shown in Table 1.
Table 1. The specific configuration parameters of Jetson NANO.

Features     Jetson NANO developer kit
GPU          128-core Maxwell, 472 GFLOPS (FP16)
CPU          4-core ARM Cortex-A57 @ 1.43 GHz
Memory       4 GB 64-bit LPDDR4, 25.6 GB/s
Storage      16 GB eMMC
WiFi/BT      external WiFi dongle (bundled)
USB ports    1x USB 3.0, 2x USB 2.0, 1x USB 2.0 Micro-B
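As a quick sanity check of the deep learning stack described in this section, the snippet below confirms that the Maxwell GPU is visible to TensorFlow; it assumes NVIDIA's JetPack-compatible TensorFlow build is installed, which varies by JetPack release.

```python
# Verify that TensorFlow can see the 128-core Maxwell GPU under JetPack/Ubuntu.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible to TensorFlow:", gpus)
```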


2.3. Serial robotic arm

The body of the 6-DOF serial manipulator weighs about 1.35 kg, and its payload is about 250 g. It is powered by a 7.5 V, 3 A DC adapter (5.5 mm plug) and is equipped with four single-axis intelligent bus servos, one intelligent bus servo and one single-axis burn-resistant bus servo. The servo accuracy is 0.3°. The servos are easy to connect and offer high precision and large torque; they support angle readback, voltage feedback and temperature feedback to ensure that the manipulator is always in good condition. The programming environment is the Wondercode software, which supports Python programming.

3. Software design

3.1. Overall technical route


Software design is the core of the intelligent grasping experimental platform [5]; it mainly needs to realize image recognition of the target and motion control of the manipulator. Taking the sorting of object blocks based on color recognition as an example, a simple flow chart is shown in Figure 2. The experimental platform collects the work-piece image through the vision system, obtains the center point of the image, converts the color space, applies Gaussian blur, looks up the color dictionary, performs erosion and dilation, finds and draws the contours, then finds and draws the center point of the color block, and sends the position data to the handling process. The control program then moves the manipulator, rotates it according to the angle of the identified work-piece, and carries out grasping and placing according to the color classification.
Figure 2. Simple flow chart of the system (after initialization, the machine vision branch runs image acquisition, image preprocessing, target recognition algorithm training, target information extraction and target information conversion; the motion control branch waits until target information is received, then reads it, performs trajectory planning and motor trajectory calculation, completes the grab, and enters the next round of grabbing)

3.2. Target detection


The flow chart of target detection is shown in Figure 3.
We use the camera to capture real-time images, which can be converted to grayscale in OpenCV where needed; the frame width and height can be set to 640×480. The image is then rectified using the camera intrinsic matrix and distortion coefficients, and cv2.cvtColor with the cv2.COLOR_BGR2HSV flag converts it from the BGR to the HSV color space. The BGR color space is generated by superposing different amounts of the three basic colors Red/Green/Blue in a Cartesian coordinate system. The HSV space represents the points of the RGB color space in an inverted cone. HSV stands for Hue/Saturation/Value; in OpenCV, H takes values in [0, 179], S in [0, 255] and V in [0, 255].
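A minimal sketch of this acquisition and conversion step follows; the intrinsic matrix and distortion coefficients are placeholders, since the real values come from a prior camera calibration that the paper does not list.

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice these come from cv2.calibrateCamera().
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume mild distortion for this sketch

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # 640x480 as in the text
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()
if ok:
    # Rectify with the intrinsic matrix and distortion coefficients,
    # then convert BGR -> HSV as described above.
    undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
    hsv = cv2.cvtColor(undistorted, cv2.COLOR_BGR2HSV)
cap.release()
```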

Figure 3. Flow chart of target detection (image acquisition → image calibration → color space conversion → Gaussian blur → color dictionary lookup → erosion → dilation → find and draw contours → extract target information)

Next, the image is segmented based on morphology. Gaussian blur, applied with cv2.GaussianBlur(), removes noise by suppressing the high-frequency components of the image. Then the cavities in the image are filled by erosion and dilation to eliminate the influence of reflections.
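For example, these denoising and hole-filling steps might look like this (the kernel sizes are illustrative, not the platform's tuned values):

```python
import cv2
import numpy as np

def denoise_and_fill(image):
    """Remove high-frequency noise with a Gaussian blur, then fill
    cavities (e.g. from reflections) by dilation followed by erosion."""
    blurred = cv2.GaussianBlur(image, (5, 5), 0)
    kernel = np.ones((5, 5), np.uint8)
    # MORPH_CLOSE = dilation then erosion: fills small holes in the blob.
    return cv2.morphologyEx(blurred, cv2.MORPH_CLOSE, kernel)
```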
The image must be binarized before contours can be found. Looking for a contour is like looking for a white object on a black background, so a binary image is used to extract the outline [6]. Before finding the contours, threshold processing or Canny edge detection should be carried out. We use the function cv2.findContours() to find contours; the function cv2.drawContours() can then draw any shape from the boundary points it is given. After the contour of the object is accurately extracted, the material can be accurately framed. Finally, the coordinates of the centroid of the object are obtained by computing the raw and central image moments of the material region. The pixel coordinates are then transformed into the manipulator coordinate system by a matrix translation and rotation.
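A sketch of this localization step, assuming a binary mask as input, is shown below; the rotation R, translation t and scale of the pixel-to-robot transform are placeholders for the platform's hand-eye calibration, which the paper does not detail.

```python
import cv2
import numpy as np

def locate_block(binary_mask):
    """Return the centroid (pixel coordinates) of the largest contour."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid from moments

def pixel_to_robot(cx, cy, R=np.eye(2), t=np.zeros(2), scale=1.0):
    """Rotate, scale and translate a pixel point into the manipulator frame."""
    return R @ (scale * np.array([cx, cy])) + t
```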

3.3. Motion control


The flow chart of the manipulator motion control is shown in Figure 4. Using the image data processed by OpenCV, the angle (in radians) between the target object and the manipulator is calculated from the x, y, z coordinates of the target object, converted into the angle that each servo needs to rotate, and the calculated values are returned. At the same time, each servo command is converted into a PWM pulse width, so as to control the rotation of the servos, complete the grasp of the target block and reset the manipulator.
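A hedged sketch of the angle-to-pulse mapping follows; the 500-2500 microsecond range over 0-180 degrees is a common hobby-servo convention, not a specification the paper gives for this platform's bus servos.

```python
import math

def base_angle_deg(x, y):
    """Base rotation toward a target at (x, y) in the manipulator frame."""
    return math.degrees(math.atan2(y, x))

def angle_to_pulse_us(angle_deg, min_us=500, max_us=2500, max_deg=180):
    """Map a joint angle to a PWM pulse width in microseconds."""
    angle_deg = max(0.0, min(max_deg, angle_deg))
    return min_us + (max_us - min_us) * angle_deg / max_deg
```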
The serial manipulator is formed by multiple joints connected in series. According to the D-H representation, the joint angle, link offset, link length and link twist of each joint of the manipulator can be expressed. The grasping action is performed as a vertical (top-down) grasp. Assuming that the plane coordinates of the identified block are (Xi, Yi) and its target position is (Xj, Yj), the grasping path of the manipulator can be planned as follows: 1) The gripper moves above the block, to coordinates (Xi, Yi, Zm), where Zm is the set clearance height in the vertical direction [2]. 2) The gripper jaw descends vertically to (Xi, Yi, Zp1), where Zp1 is the set grasping plane in the vertical direction. 3) The jaws clamp to grab the object block. 4) The gripper rises vertically back to (Xi, Yi, Zm). 5) The gripper moves above the specified target position of the block, (Xj, Yj, Zm). 6) The gripper jaws descend vertically to (Xj, Yj, Zp2). 7) The gripper opens to complete the placement of the object block. 8) The gripper rises vertically and returns to (Xj, Yj, Zm). 9) The manipulator resets.
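This nine-step sequence can be encoded as a simple waypoint list, as in the sketch below; the move/grip/home actions are hypothetical labels, not the platform's actual control API.

```python
def plan_pick_and_place(xi, yi, xj, yj, z_m, z_p1, z_p2):
    """Waypoints for the nine-step pick-and-place sequence above."""
    return [
        ("move", (xi, yi, z_m)),   # 1) above the block
        ("move", (xi, yi, z_p1)),  # 2) descend to the grasping plane
        ("grip", "close"),         # 3) clamp the jaws
        ("move", (xi, yi, z_m)),   # 4) lift back up
        ("move", (xj, yj, z_m)),   # 5) above the target position
        ("move", (xj, yj, z_p2)),  # 6) descend to the placing plane
        ("grip", "open"),          # 7) release the block
        ("move", (xj, yj, z_m)),   # 8) rise back up
        ("home", None),            # 9) manipulator reset
    ]
```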

Figure 4. Flow chart of motion control (read target information → wait until the block is in place → trajectory planning → inverse kinematic position solution → trajectory calculation of each motor → complete grab → manipulator reset)

4. Operational results and innovative expansion of experimental projects


Through debugging, the experimental platform realizes communication between all of its modules. In hundreds of tests, the color of the object block was identified correctly every time, the recognition time was less than 5 seconds, and the grasping success rate reached about 96%. Following the given instructions, the manipulator sends the object block to the corresponding area along the planned path to complete sorting, grabbing and stacking operations; the actual effect of the grasping motion is shown in Figure 5. In this paper, effective control of the manipulator is realized on the basis of fundamental image processing, and the sorting target is accomplished.

Figure 5. The actual effect of the grasping motion

The experimental platform was built to be open and is used for the practical teaching of undergraduates. The experimental project can not only complete color-based sorting, but can also be extended to realize more functions on this platform. For example, the target recognition algorithm for the image can be optimized, recognition of objects with different shapes (such as garbage recognition and classification) can be explored, and functions such as arm tracking movement and object label recognition can be implemented. The structure of the manipulator can also be improved, the path planning of the manipulator can be further optimized, and an overall human-computer interaction interface can be developed for the system to further simulate the intelligent manufacturing environment of a factory.

5. Conclusion
The robot intelligent grasping experimental platform based on Jetson NANO and machine vision is intended to be open to undergraduates majoring in automation and in intelligent science and technology in the College of Artificial Intelligence of our university. It can directly support the experimental teaching of several core professional courses: experiments on image recognition, Gaussian blur, erosion, dilation, face detection, etc. in the course "Machine Vision Technology"; forward and inverse kinematics analysis and servo control in the course "Introduction to Robotics"; and systematic project work such as image recognition, color block sorting, tracking and grasping, and intelligent palletizing in the course "Intelligent Professional Practice".
The experimental platform built in this paper simulates a real manufacturing production-line scene well. Closely tied to the theme of intelligent collaboration, it can meet students' learning and practical needs. Through this project, students can better understand the principles of Industry 4.0 systems, integrate comprehensive knowledge of electronic information, automation, mechatronics and computer technology, and, progressing from individual topics to the whole system, exercise their learning and project development abilities and improve their capacity for innovation.

Acknowledgments
Research supported by Nankai University's 2021 Self-made Experimental Teaching Equipment Project
“Robot intelligent grasping experimental platform combining Jetson NANO and machine vision”
(21NKZZYQ04), and the Fourth Teaching Reform Research Project of the Teaching Steering
Committee of Automation Specialty “Reform and practice of diversified teaching methods of
automation experimental and practical courses under the background of New Engineering” (202115).

References
[1] Tian Hao, "Research and Design of Intelligent Sorting System of Production Line Based on Machine Vision" [D], Liaoning University of Technology, 2021.
[2] Dong Jingchuan, Zhang Chengjun, Wang Yicheng, et al., "Experiment on visual localization for robotic intelligent grasping task" [J], Experimental Technology and Management, Vol. 37, No. 3, pp. 56-59, Mar. 2020.
[3] Ke Xin, "Embedded blind visual assistant system based on Jetson Nano" [D], Guizhou University, 2021.
[4] Huang Kun, "Realization of real time tracking system for humanoid robot assisted walking space orbit based on Jetson Nano" [D], Chongqing University of Posts and Telecommunications, 2021.
[5] Bao Guangxuan, Huang Jiacai, Li Yao, et al., "Vision-based Intelligent Sorting System for Parallel Robots" [J], Journal of Nanjing Institute of Technology (Natural Science Edition), Vol. 19, No. 1, pp. 7-11, 2021.
[6] Fang Guodong, Gao Junwei, Zhu Chenxi, Kong Deshuai, "Intelligent Sorting System for Manipulator Based on Machine Vision" [J], Instrument Technique and Sensor, No. 12, pp. 72-76+81, 2020.
