Paper:
Fast Detection of Tomato Peduncle with a Harvesting Robot

Yoshida, T., Fukao, T., and Hasegawa, T.
This paper proposes a fast method for detecting tomato peduncles by a harvesting robot. The main objective of this study is to develop automated harvesting with a robot. The harvesting robot is equipped with an RGB-D camera to detect peduncles, and an end effector to harvest tomatoes. It is necessary for robots to detect where to cut a plant for harvesting. The proposed method detects peduncles using a point cloud created by the RGB-D camera. Pre-processing is performed with voxelization in two resolutions to reduce the computational time needed to calculate the positional relationship between voxels. Finally, an energy function is defined based on three conditions of a peduncle, and this function is minimized to identify the cutting point on each peduncle. To experimentally demonstrate the effectiveness of our approach, a robot was used to identify the peduncles of target tomato plants and harvest the tomatoes at a real farm. Using the proposed method, the harvesting robot achieved peduncle detection of the tomatoes, and harvested tomatoes successfully by cutting the peduncles.

Keywords: harvesting robot, peduncle detection, point cloud processing, voxel processing

1. Introduction

Agriculture in Japan has recently been faced with a shortage of workers. This serious problem is due to the aging population and a decrease in the number of agricultural workers. However, this presents an opportunity for technology development in the agricultural industry. With advances in smart agriculture such as information and communication technology, or robotics technology, farmers can earn greater profits with less labor.

There are busy agricultural periods in which more workers are required than usual. During the harvest season in particular, it is necessary for farmers to secure additional workers because the manpower required is the greatest during this time. Because harvesting is a seasonal job, it is difficult to employ workers stably. However, if farmers cannot secure enough workers, they are not able to harvest their crops at the proper time. Therefore, the main objective of this study is to develop a robot for automated harvesting to be used as a temporary worker during busy times.

Various studies have dealt with the application of agricultural robots. Irie et al. proposed an asparagus harvesting robot [1, 2]. Their robot measures whether asparagus is tall enough for harvesting with a 3D sensor. They also proposed a mechanism for a robotic arm and an end effector to grasp and cut the asparagus. Bachche et al. proposed a design and model of a gripper and cutting system for a robotic arm with 5 degrees of freedom to harvest sweet peppers in horticultural greenhouses [3]. In the proposed design, both the gripping and cutting operations were performed using a single servo motor to avoid complications in the system. Kondo et al. proposed components, manipulators, end effectors, and visual sensors for fruit harvesting robots adapted for use with tomatoes, cherry tomatoes, strawberries, and grapes [4]. Chen et al. proposed a harvesting humanoid robot system to pick tomatoes, and a visual cognition approach that enables the robot to harvest tomatoes [5]. Yaguchi et al. proposed the design and development of an autonomous tomato harvesting robot equipped with a rotational plucking gripper, which was robust to estimation errors [6]. Hayashi et al. proposed a strawberry-harvesting robot composed of a cylindrical manipulator, end-effector, machine vision unit, storage unit, and traveling unit [7, 8]. Tokunaga et al. proposed an algorithm and designed an integrated circuit for the recognition of circular patterns in binary images, based on template matching with a modified matching degree [9]. Their proposed system is part of the vision system for a watermelon harvesting robot. Yongsheng et al. proposed a mechanical vision system to design a robot that can automatically recognize and locate apples for harvesting [10]. Arima et al. proposed a cucumber harvesting robot using a visual sensor, manipulator, end effector, and traveling device [11]. Monta et al. proposed a robotic vision method using cameras, and a 3D vision system using a laser sensor for a tomato harvesting robot and a cucumber harvest-
© Fuji Technology Press Ltd. Creative Commons CC BY-ND: This is an Open Access article distributed under the terms of
the Creative Commons Attribution-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nd/4.0/).
Fast Detection of Tomato Peduncle with a Harvesting Robot
Fig. 9. Directions between voxels that belong to two different layers (voxels in layers t, t+1, and t+2; candidate directions a, b, c, and d between voxels of two adjacent layers).
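The selection illustrated in Fig. 9 amounts to picking, among the candidate directions between voxels of two layers, the one whose unit vector has the largest dot product with the vertically upward vector. A minimal sketch, in which the axis convention and the example vectors are assumptions:

```python
import numpy as np

def pick_vertical_direction(directions):
    # Score each candidate by 1 - y_hat . d_hat; the smallest score
    # belongs to the direction closest to vertically upward.
    y_hat = np.array([0.0, 1.0, 0.0])  # upward unit vector (axis choice assumed)
    scores = [1.0 - y_hat @ (d / np.linalg.norm(d)) for d in directions]
    return int(np.argmin(scores))

# Four candidate directions roughly matching a-d in Fig. 9; b is nearly vertical.
candidates = [
    np.array([-1.0, 1.0, 0.0]),  # a
    np.array([-0.1, 1.0, 0.0]),  # b
    np.array([ 0.8, 1.0, 0.0]),  # c
    np.array([ 1.5, 1.0, 0.0]),  # d
]
print("abcd"[pick_vertical_direction(candidates)])  # → b
```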
The second condition is that the target layer is close to the end effector. This is because the end effector cannot cut the peduncle if the identified cutting point is far from it, as the end effector may catch on an obstacle.
The third condition is that the target layer is close to a vertical orientation. The end effector should approach the peduncle as directly as possible, because it can only move within a limited range of angles.
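Taken together, the three conditions can be scored per layer and minimized, as formalized in Eq. (1) below. A minimal Python sketch of such a search, assuming voxel centers are given as 3D NumPy vectors in the end-effector frame (so the end effector sits at the origin) and with a simple distance threshold standing in for the "joins" relation:

```python
import numpy as np

def joins(v, w, tol=1.8):
    # Assumed adjacency test: two voxels are "joined" if their centers
    # are within one (possibly diagonal) voxel step of each other.
    return np.linalg.norm(w - v) <= tol

def layer_energy(V_l, V_l1, lam=1.0, gamma=1.0):
    """Score one layer by the three conditions (cf. Eq. (1)):
    small layer size, small L1 distance to the end effector (origin),
    and an inter-layer direction close to vertically upward."""
    y_hat = np.array([0.0, 1.0, 0.0])   # upward unit vector (axis choice assumed)
    n = len(V_l)
    size_term = n                                                  # condition 1
    dist_term = (lam / n) * sum(np.abs(v).sum() for v in V_l)      # condition 2: L1 norms
    best = np.inf                                                  # condition 3
    for v in V_l:
        for w in V_l1:
            if joins(v, w):
                best = min(best, 1.0 - y_hat @ (w - v))
    return size_term + dist_term + gamma * best

def find_cutting_layer(layers):
    # Minimize the energy over all layers that have a successor layer.
    energies = [layer_energy(layers[l], layers[l + 1]) for l in range(len(layers) - 1)]
    return int(np.argmin(energies))

# Toy example: three small layers stacked along +y; layer 1 continues
# almost straight up into layer 2 and has the lowest energy.
layers = [
    [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])],
    [np.array([0.0, 1.0, 0.0])],
    [np.array([0.0, 2.0, 0.0])],
]
print(find_cutting_layer(layers))  # → 1
```

The helper names, weights, and the unit voxel spacing are illustrative assumptions; only the structure of the three terms follows the text.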
Next, an energy function to determine whether a target layer reasonably satisfies these three conditions is defined as in Eq. (1), where l is the layer index, V(l) is the set of voxels that belong to layer l, and λ and γ are weighting coefficients.

E(l) = n(V(l)) + (λ / n(V(l))) Σ_{v ∈ V(l)} ‖v‖₁
       + γ min_{v ∈ V(l), w ∈ V(l+1)} { 1 − ŷ · (w − v)   if v joins w
                                       { ∞                 otherwise.        (1)

The first term in Eq. (1) represents the first condition, regarding the size of a target layer, where n(·) denotes the number of elements in a set. For rapid calculation, this algorithm uses the number of elements to measure the size of a target layer.

The second term in Eq. (1) represents the second condition, on the distance between a target layer and the end effector, where v are the voxels belonging to the set V(l) and ‖·‖₁ is the L1 norm. The L1 norm is used to calculate the distance between a target layer and the end effector quickly.

The third term in Eq. (1) represents the third condition, regarding the direction of the layer, where ŷ is a vertically upward unit vector. Fig. 9 shows the directions between voxels that belong to two different layers. The algorithm selects the direction between voxels that is closest to the vertically upward direction; for example, in the case of Fig. 9, it selects direction b.

The energy function is discretized by the introduction of the layers in this algorithm. The minimum of the values calculated for all layers is taken as the minimum value of the energy function.

Fig. 10. Proposed harvesting robot and representative field used for experiment.

3. Experiments

For the experiments, the robot was tasked with identifying the cutting points of target tomato peduncles to harvest target tomatoes at a real farm. Fig. 10 shows our experimental field and the proposed harvesting robot equipped with a robot arm and an end effector. Most of the tomato peduncles are on the rails, as shown in Fig. 10. The robot faces one side of a row of tomatoes while moving on the rails between the rows.

First, the robot was placed in front of a target tomato as the initial position. The robot then observed the tomato with the RGB-D camera and identified its cutting point as described above. Finally, the robot inserted its end effector at the cutting point without dropping the tomato.

Figures 11 and 12 show the results of this experiment. Figs. 11(a) and 12(a) show the input point cloud data for one frame as captured by the RGB-D camera. Figs. 11(b) and 12(b) show the output voxels based on the point clouds in Figs. 11(a) and 12(a), together with the cutting points identified by the proposed method. Both cutting points were identified as points belonging to the peduncles that could be approached by the end effector. Note that Fig. 12 shows two tomato bunches but only one cutting point; this is because the algorithm outputs the point that is closest to the end effector. Figs. 11(c) and 12(c) show
Fig. 11. Case 1 of the experimental results: (a) input point cloud, (b) output voxels and a cutting point, (c) energies of the search layers.

Fig. 12. Case 2 of the experimental results: (a) input point cloud, (b) output voxels and a cutting point, (c) energies of the search layers.
Trial No.             1   2   3   4   5   6   7
Number of tomatoes    4   4   4   4   2   2   2
Number of successes   4   4   4   4   2   2   1
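Summing the rows of the table, 21 of the 22 target tomatoes were harvested successfully; a quick check of the arithmetic:

```python
tomatoes  = [4, 4, 4, 4, 2, 2, 2]   # per-trial targets from the table
successes = [4, 4, 4, 4, 2, 2, 1]   # per-trial successful harvests
total, ok = sum(tomatoes), sum(successes)
print(f"{ok}/{total} harvested ({100 * ok / total:.1f}%)")  # → 21/22 harvested (95.5%)
```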
Name:
Takeshi Yoshida

Affiliation:
Assistant Professor, Research Organization of Science and Technology, Ritsumeikan University

Address:
1-1-1 Noji-higashi, Kusatsu, Shiga 525-8577, Japan

Brief Biographical History:
2015- Assistant Professor, Aoyama Gakuin University
2017- Assistant Professor, Ritsumeikan University

Main Works:
• robotics, computer vision

Membership in Academic Societies:
• The Robotics Society of Japan (RSJ)
• The Society of Instrument and Control Engineers (SICE)
• Information Processing Society of Japan (IPSJ)

Name:
Takaomi Hasegawa

Affiliation:
Project Assistant Manager, Robotics System Development Department, Denso Corporation

Address:
1-1 Showa-cho, Kariya, Aichi 448-8661, Japan

Brief Biographical History:
2006- Denso Corporation

Main Works:
• robotics