
The International Journal of Advanced Manufacturing Technology

https://doi.org/10.1007/s00170-020-06226-5

ORIGINAL ARTICLE

Machine vision system for quality inspection of beans


Peterson Adriano Belan 1 · Robson Aparecido Gomes de Macedo 1 · Wonder Alexandre Luz Alves 1 · José Carlos Curvelo Santana 2 · Sidnei Alves Araújo 1

Received: 6 July 2020 / Accepted: 7 October 2020


© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract
This paper presents a machine vision system (MVS) for the visual quality inspection of beans, composed of a set of software and hardware. The software was built from the proposed approaches for segmentation, classification, and defect detection, and the hardware consists of equipment developed with low-cost electromechanical materials. Experiments were conducted in two modes: offline and online. For the offline experiments, aimed at evaluating the proposed approaches, we composed a database containing 270 images of samples of beans with different mixtures of skin colors and defects. In the online mode, the beans contained in a batch, for example, a 1-kg bag, are spilled continuously onto the conveyor belt for the MVS to perform the inspection, similar to what occurs in an automated industrial visual inspection process. In the offline experiments, our approaches for segmentation, classification, and defect detection achieved average success rates of 99.6%, 99.6%, and 90.0%, respectively. In addition, the results obtained in the online mode demonstrated the robustness and viability of the proposed MVS, since it is capable of analyzing an image of 1280 × 720 pixels in only 1.5 s, with average success rates of 98.5%, 97.8%, and 85.0%, respectively, for segmenting, classifying, and detecting defects in the grains contained in each analyzed image.

Keywords Machine vision · Automatic inspection · Visual quality · Grains · Beans

* Sidnei Alves Araújo
saraujo@uni9.pro

1 Informatics and Knowledge Management Post Graduate Program, Nove de Julho University - UNINOVE, Rua Vergueiro, 235/249, Liberdade, São Paulo, SP 01504-001, Brazil
2 Federal University of ABC, Av. dos Estados, 5001, Bangu, Santo André, SP 09210-580, Brazil

1 Introduction

Beans are a legume cultivated in almost all countries with tropical and subtropical climates and are of great importance in human nutrition, since they constitute a food of significant nutritional value [1]. The largest world producers of beans are Myanmar, India, Brazil, the USA, Mexico, and Tanzania, in that order, which together are responsible for approximately 60% of the world production of this grain [2]. Brazil is the world's largest producer and consumer of common beans (Phaseolus vulgaris L.). In fact, beans and rice form the basis of the diet of the Brazilian people.

An extremely important issue in the commercialization of agricultural grains, with regard to both purchase and sale, is their visual quality, which is assessed from properties such as color, shape, and size and generally impacts the product's market price. Thus, visual quality inspection is a task of great importance for most agricultural products [3–5]. However, it is very common for such tasks to be carried out manually, which can be time-consuming, increasing operating costs, in addition to being subject to human error and presenting difficulties in standardizing results [6–9].

The quality inspection of Brazilian beans, for example, is conducted manually following a set of standards and procedures of the Brazilian Ministry of Agriculture, Livestock and Supply (BMALS) and can be briefly described as follows: a sample of 1 kg is extracted from a lot of beans and divided into four equal subsamples of 250 g, one of which is intended for visual inspection. Then, the foreign matter and impurities are separated using a circular sieve with holes 5 mm in diameter, and finally a visual inspection of the subsample (after extraction of foreign matter and impurities) is performed to determine the group, class, and type of the product according to the reference values established by BMALS [10]. The group is related to the botanical species (Phaseolus vulgaris L.: Group I; Vigna unguiculata (L.): Group II), the class depends on the skin color of the grains contained in the inspected subsample (black, white, colored, or mixed), and the type is determined according to the defects found.

Among the defects found in bean grains are broken, bored
by insect, moldy, burned, crushed, chopped, sprouted, wrinkled, stained, and discolored [10, 11]. The main problems associated with the current bean quality inspection process are the high probability of errors and the difficulty of standardizing the results.

In this context, a computer vision system (CVS) capable of spatially locating each grain in a sample, classifying it according to the color of its skin, and identifying defects constitutes an important practical application to assist in the product quality inspection process. In fact, computer vision systems represent an important alternative for the automation of visual inspection tasks, since they can save time and industrial costs and can improve the quality of the products. Because of this, computer vision can be considered one of the keys to achieving Industry 4.0 [12].

In the last two decades, many works have been presented in the literature proposing the development of computer systems to assist the visual quality inspection of several agricultural grains, such as rice [3, 13–19], coffee beans [20–22], beans [7, 9, 23–28], and other grains [8, 13, 29, 30]. Such works corroborate the importance of the topic addressed in this work.

This work presents the development of a machine vision system (MVS), composed of a set of hardware and software, which is applicable to the process of visual quality inspection of beans. Although the software of the proposed MVS has been designed to deal with Brazilian beans (class and type determination), it can be adapted to deal with beans from other countries.

The main contributions of this work are (i) the development of a fast and robust approach to segment grains, capable of spatially locating each grain even in cases where many grains are glued to each other (touching grains); (ii) the development of approaches based on artificial neural networks and convolutional neural networks for classification and detection of defects; (iii) the design and development of equipment for the visual quality inspection of Brazilian beans; and (iv) the composition of an image database that was used in the experiments and will be made available so that other researchers can evaluate and compare their methods.

2 Related works

In the literature, there are several works proposing the use of computer vision systems (CVS) for the analysis and classification of seeds and grains. However, there are very few works dealing with the automatic visual inspection of beans.

In [7], the authors proposed a CVS for the classification of beans based on the color and size of grains using morphological operations, color histograms, and an artificial neural network. The classification success rate obtained in this work was 90.6%.

In [13], a CVS to classify seeds, beans, and cereals is presented. However, in the experiments conducted in this work, only grains of rice and lentils were considered. For both grains, the obtained average accuracy was close to 95%.

The authors of [23] proposed a CVS to identify six varieties of beans grown in Italy from attributes such as size, shape, color, and texture of the grains, which achieved a success rate of 99.56%. The same authors conducted other experiments, reported in [24], taking into account fifteen varieties of Italian beans, where they achieved a success rate of 98.49%.

In [25], image processing techniques were employed to evaluate changes in the color of beans during storage, in order to correlate these changes with a phenomenon called "hard to cook grains." The results showed the existence of the investigated correlation and confirmed the efficiency of the proposed approach.

In [31], the authors proposed a CVS to identify ten varieties of beans grown in Iran, which employs a multilayer perceptron artificial neural network (MLP-ANN) to classify the grains from color features. However, unlike most of the works addressing bean classification, the results obtained in this work were expressed only in terms of sensitivity (96%) and specificity (97.1%).

In [32], the authors proposed a method for fast quality control of red beans, which first maps the pixels belonging to the grains, extracting them from the blue background, and then classifies those pixels using an algorithm based on fuzzy C-means clustering (FCM) to determine the mixture of other types of commercial beans, as well as discolored red beans, in the analyzed sample. In their experiments, they report accuracies varying from 10 to 96%, depending on the mixture of colors and textures of the grains contained in the analyzed sample.

The works [7, 13, 23–25, 32] show the need for and the importance of computer systems for bean inspection. However, the systems developed in these works do not spatially localize the grains. In addition, the proposed approaches for segmentation fail in images with "glued" grains (touching grains). For this reason, in the experiments carried out in these works, the grains are purposely spaced from each other before image acquisition to facilitate segmentation [7, 13, 23, 24], or only previously segmented grains are processed [25, 31], or the grains are not isolated from each other before classification [32]. This limitation hinders the applicability of such systems in industrial visual inspection processes.

In [33], the authors proposed a CVS that uses cross-correlation-based granulometry in the segmentation step and a k-means-based algorithm that evaluates the predominant color of each grain, classifying it into three possible classes (Carioca, Mulatto, or Black). Although the segmentation algorithm proposed in this work is able to solve the problem of touching grains, the computational time spent in this step makes the practical application of the proposed CVS unfeasible, since it takes 18 s on average to process an image of 800 × 600 pixels (6 s for pre-processing, 11 s

for segmentation, and 1 s for grain classification). In this work, success rates of 99.88% and 99.99% were obtained, respectively, in the segmentation and classification steps. To the best of our knowledge, this is the work that presents the best success rates in the tasks of segmentation and classification of beans.

The authors of [26] proposed a CVS to classify beans that employs two pre-processing steps, the first based on the nearest neighbor algorithm to remove the background, and the second using the ultimate grain filter (UGF) technique [34] to remove possible residual noise from the previous step. Then, they applied the watershed transform (WT) to segment the grains and the algorithm proposed in [33] for classification. The proposed CVS takes 3.8 s to process an image of 800 × 600 pixels, obtaining success rates of 98.87% for segmentation and 99.95% for classification. The reduction in processing time was presented by the authors as a contribution of this work.

In [27], a CVS for the classification of the three beans most consumed in Brazil (Carioca, Mulatto, and Black) is presented. In the segmentation step, the watershed transform (WT) was combined with heuristics that analyze the size of segments (connected components), being able to separate a segment into two or more grains or to join one or more segments that possibly belong to the same grain. The classification step employs a multilayer perceptron artificial neural network (MLP-ANN), which was trained using a set of color features extracted from the grains. According to the authors, the proposed CVS is capable of processing an image of 1280 × 720 pixels in approximately 1 s, with hit rates of 97.39% and 99.14% for segmentation and classification, respectively.

In [28], improvements were made to the CVS proposed in [27], adding the cross-correlation granulometry (CCG) technique in the segmentation step. In these works, CCG is employed only in cases where the segmentation by WT fails. Such improvements allowed decreasing the processing time to less than 1 s, with success rates above 99%. Although the approaches proposed in these works allowed a drastic reduction in processing time, they did not manage to reproduce the success rates obtained by [33].

In [35], the authors proposed a CVS for automatic bean classification. To this end, they evaluated the performance of support vector machine (SVM) and random forest (RF) algorithms in classifying local and global features extracted from the segmented grains, which are combined using a bag-of-features (BoF) method. According to the authors, the best accuracy (98.5%) was achieved using RF and global features. Although the authors mention that the beans are segmented using the WT algorithm, they do not present results obtained in the segmentation step.

As can be perceived, in the most recent research dealing with the visual inspection of beans, investments have been made especially in the segmentation step, to obtain a robust approach (capable of spatially localizing the grains even if they are glued together) with low computational cost, aiming to enable the development of systems that can be employed in industrial visual inspection processes. It can also be noted that none of the reported works explores the detection of defects in the beans, perhaps due to the difficulty of extracting features in very small regions of interest. These findings have motivated the development of the segmentation and defect detection approaches proposed in this work.

It is worth mentioning that the results reported above were obtained in experiments using sets of previously acquired images, which we call in this work the "offline mode." Only [33] also conducted experiments in "online mode," in which the grains of a batch are spilled continuously on the conveyor belt to be inspected by the CVS. Obviously, in this mode, the results obtained (overall success rate of 91.33%, including segmentation and classification) were much lower than those achieved in the same work in offline mode (99.88% and 99.99% in the segmentation and classification steps).

Regarding the physical devices used in experiments to evaluate computer vision systems for inspecting grains, the literature describes simple mechanisms to control lighting conditions during image acquisition [7, 13, 25]; low-cost prototypes composed of a grain input box, conveyor belt, and image acquisition chamber, such as those proposed by [33, 36]; and mechatronic devices for image acquisition, such as those presented in [37, 38]. The only complete equipment found in the literature was proposed by [18], for the inspection of rice grains. According to the authors, the equipment is capable of carrying out all stages of the process of inspecting the visual quality of the product, from peeling to classification, which is based on the shape and size of the grains. Regarding performance, the authors report only that the equipment proved to be 31.3% more efficient than manual inspection, considering the time spent in the analysis and the classification accuracy. They also mention that the average time to process each image is 7.33 s.

The lack of specific equipment for the visual quality inspection of beans has motivated the development of the equipment that composes the MVS proposed in this work.

3 Proposed approaches for segmentation, classification, and detection of defects of beans

3.1 Segmentation

Consider as input an RGB color image denoted by a function IRGB : D → K that maps a rectangular grid D ⊆ ℤ² into a set of color intensities K = {0, 1, …, 255}³, 3 being the number of bands of the RGB color space. Consider also that each pixel of IRGB is represented by its coordinates (x, y), whose values x

and y denote the horizontal and vertical coordinates of the pixel.

The first step of segmentation consists in converting the input RGB image (IRGB) to the CIELab color space, generating ILab. Then, using a pre-processing step proposed by [33], each pixel of ILab is mapped to one of two gray levels in Igray, in which typical bean colors are represented by the darker tone (black) and typical background colors are represented by the lighter tone (white). However, differently from [33], in which a training is done for each image using the k-NN algorithm (with k = 1), we employed a lookup table (LUT) to compute this mapping more efficiently. Once the LUT is created, the mapping process that removes the background of the image can be described by Eq. 1:

Igray(x, y) = LUT(ILab(x, y))   (1)

In the sequence, the morphological opening operator (Eq. 2) is applied to Igray to remove small connected components, considered as noise:

A = δ_ee(ε_ee(Igray))   (2)

where δ is the dilation operation, ε represents the erosion operation, ee is the structuring element, and Igray is the image in which the opening operation is applied.

The next step is the application of the watershed transform (WT), using the distance transform (Euclidean distance), to segment the grains in Igray. As WT is very susceptible to the problem of over-segmentation [39], we use information such as the average grain size and the distance between the centers of two grains to join two or more fragments erroneously divided by WT. In other words, the joining (J) of two or more components is done when the distance between their centers is less than the average size of a grain and the sum of their areas is less than or equal to the minimum area of the average grain size. Such values are predefined based on all images of the employed database.

Finally, the cross-correlation granulometry (CCG) technique, described in [33] and mathematically formulated in Eq. 3, is applied in regions of Igray where WT was not able to segment the grains correctly. These regions are identified as connected components considered too large (at least 1.6 times the average grain size). However, instead of the 162 templates originally used in [33], we employed a set of 72 elliptical templates (Tp) defined from 4 scales and 18 rotation angles at each scale. Some of them are illustrated in Fig. 1.

CCG(Igray) = max_{T ∈ Tp} NCC(CC(Igray), T)   (3)

where CC(Igray) is the connected component extracted from Igray, NCC is the normalized cross-correlation operation, and T is a template belonging to the set of templates Tp.

The working of the segmentation approach, including all processing steps, is shown in Fig. 2. As segmentation using CCG requires a lot of processing time, as demonstrated in [33], we first conduct a previous segmentation using WT and the heuristic for joining fragments of grains, and then apply CCG to refine the segmentation. Thus, our approach achieved a result very close to that achieved in [33], with a much lower processing time (0.1 s, on average).

3.2 Classification

Grain classification consists in labeling each grain of the segmented image (Iseg) with one of the three classes considered in this work, that is, Carioca (C), Mulatto (M), or Black (B). The proposed approach employs a multilayer perceptron (MLP) ANN with the following architecture: 20 neurons in the input layer, a hidden layer with 45 neurons, and three neurons in the output layer to indicate the class of a segmented grain.

The feature vector v presented to the MLP input layer is composed of 20 average values of RGB and CIELab color components (R̄, Ḡ, B̄, ā, b̄), extracted from pixels over 4 circular projections starting from the center of a grain, as illustrated in Fig. 3. This way of extracting features was based on the work of [40]. In addition, disregarding the component L of the CIELab color system increases the invariance of the feature vector to changes in lighting.

The MLP training was carried out using a set of 400 vectors extracted from images not belonging to the datasets employed to evaluate the classification approach. Other configuration parameters adopted in the training were: stop criterion = number of epochs (1000) or error 10⁻⁵, and learning rate = 0.05.
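The 20-value input vector described above can be sketched as follows. This is an illustrative Python re-implementation, not the authors' C/C++ code; the `rgb_to_ab` helper is a hypothetical stand-in for a proper RGB → CIELab conversion (only the a and b channels are used, since L is discarded for lighting invariance):

```python
import math

def rgb_to_ab(r, g, b):
    """Hypothetical stand-in for an RGB -> CIELab conversion; a crude
    opponent-color approximation, for illustration only."""
    return (r - g), (0.5 * (r + g) - b)

def grain_feature_vector(pixels, center, radii):
    """Build the 20-value MLP input: mean R, G, B, a, b over pixels
    sampled on 4 circular projections around the grain center.
    `pixels` maps (x, y) -> (R, G, B); `radii` holds the 4 radii."""
    cx, cy = center
    features = []
    for radius in radii:                      # 4 circular projections
        samples = []
        for k in range(36):                   # sample every 10 degrees
            ang = 2.0 * math.pi * k / 36
            x = int(round(cx + radius * math.cos(ang)))
            y = int(round(cy + radius * math.sin(ang)))
            if (x, y) in pixels:
                samples.append(pixels[(x, y)])
        if not samples:
            features.extend([0.0] * 5)
            continue
        n = len(samples)
        mr = sum(p[0] for p in samples) / n
        mg = sum(p[1] for p in samples) / n
        mb = sum(p[2] for p in samples) / n
        ab = [rgb_to_ab(*p) for p in samples]
        ma = sum(v[0] for v in ab) / n
        mbb = sum(v[1] for v in ab) / n
        features.extend([mr, mg, mb, ma, mbb])
    return features  # length 20 = 5 components x 4 projections

# toy grain: a uniform brown patch
img = {(x, y): (150, 100, 60) for x in range(30) for y in range(30)}
v = grain_feature_vector(img, center=(15, 15), radii=[2, 4, 6, 8])
```

For a uniform patch, each projection yields the same 5 averages, so `v` simply repeats them four times; on a real grain, the four rings capture the radial color variation that distinguishes the classes.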

Fig. 1 Some examples of elliptical templates (Tp) employed in the proposed segmentation approach
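The normalized cross-correlation (NCC) at the core of Eq. 3 can be sketched as below. This is a minimal zero-mean NCC between two equally sized patches; the sliding of the 72 elliptical templates over a connected component, and the maximum over templates, are omitted for brevity:

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equally
    sized 2-D patches, returning a value in [-1, 1]."""
    a = [v for row in patch for v in row]
    b = [v for row in template for v in row]
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# identical patches correlate perfectly...
p = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(ncc(p, p))            # 1.0
# ...and an inverted patch anti-correlates
q = [[8 - v for v in row] for row in p]
print(ncc(p, q))            # -1.0
```

Because the score is normalized, a template matches a grain-shaped blob equally well regardless of overall brightness, which is what makes the elliptical templates of Fig. 1 usable across lighting variations.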

Fig. 2 Segmentation steps. a Input image (IRGB), b mapped image (Igray), c result of segmentation by WT, d application of the CCG + joining (J) grains step, and e final result of the segmentation approach (Iseg)
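The fragment-joining heuristic applied after the watershed step (Sect. 3.1) can be sketched as follows. This is an illustrative re-implementation, not the authors' code; `avg_grain_size` and `max_grain_area` stand for the dataset-derived thresholds mentioned in the text:

```python
import math

def join_fragments(components, avg_grain_size, max_grain_area):
    """Greedily merge watershed fragments that likely belong to one
    grain: two components are joined when the distance between their
    centers is below the average grain size and their combined area
    does not exceed the area expected for a single grain."""
    comps = [dict(c) for c in components]
    merged = True
    while merged:
        merged = False
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                ci, cj = comps[i], comps[j]
                d = math.dist(ci["center"], cj["center"])
                if d < avg_grain_size and ci["area"] + cj["area"] <= max_grain_area:
                    # merge j into i: area-weighted new center
                    a = ci["area"] + cj["area"]
                    ci["center"] = tuple(
                        (ci["center"][k] * ci["area"] +
                         cj["center"][k] * cj["area"]) / a
                        for k in (0, 1))
                    ci["area"] = a
                    del comps[j]
                    merged = True
                    break
            if merged:
                break
    return comps

# two fragments of one split grain plus one whole, distant grain
frags = [{"center": (10.0, 10.0), "area": 300},
         {"center": (14.0, 10.0), "area": 250},
         {"center": (60.0, 60.0), "area": 550}]
out = join_fragments(frags, avg_grain_size=20.0, max_grain_area=600)
print(len(out))   # 2
```

The area cap is what prevents two genuinely adjacent grains from being fused: their combined area would exceed that of a single grain, so only watershed over-segments are merged.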

Fig. 3 Architecture of the MLP employed for bean classification

Fig. 4 Final result of the classification approach

In summary, the purpose of the classification approach is to extract a feature vector vi from each grain of Iseg, considering the RGB and CIELab color spaces, and map it into a class ci ∈ {C, M, B}, generating an output image similar to that shown in Fig. 4. Although the classification of grains may seem easy, mistakes can occur in this task, because there are many cases where brown streaks cover a considerable area of a Carioca bean, making it very similar to a Mulatto bean, as can be observed in Figs. 2, 3, and 4.

3.3 Detection of defects

Defect detection consists of analyzing each grain of the segmented image (Iseg) and labeling those that have one or more of the three defects considered in this work. The approaches for detecting these defects are described in Sections 3.3.1, 3.3.2, and 3.3.3 below.

3.3.1 Broken beans

As is well known, the shape of a whole bean grain is very close to an ellipse. Thus, a broken bean can be detected using an algorithm that extracts the signature of a segmented connected component (CC) and compares it with the signature extracted from the image of an ellipse, or from the image of a grain considered as standard.

The analysis of the broken grain signature, described by Eqs. 4 to 6, is conducted using the seven invariant moments of Hu [41]. Basically, we compute the difference between the signature (diff_sign) of the grain under analysis and that of a standard grain. If diff_sign is greater than a threshold (t_diff_sign), the grain is considered broken. In our experiments, we adopted t_diff_sign = 0.4 (obtained experimentally).

diff_sign(A, B) = Σ_{i=1}^{7} |1/m_i^A − 1/m_i^B|   (4)

m_i^A = −sign(h_i^A) · log|h_i^A|   (5)

m_i^B = −sign(h_i^B) · log|h_i^B|   (6)

where h_i^A and h_i^B are, respectively, the seven invariant Hu moments calculated from the analyzed object A (a connected component extracted from the segmented image) and from the object B representing a standard grain.

After each grain in the segmented image is analyzed, those identified as broken (diff_sign > t_diff_sign) are labeled, as shown in Fig. 5.

Obviously, the grain signature could be extracted using other descriptors, such as the curvature scale space (CSS) or the Fourier transform. However, calculating Hu's moments is much simpler and faster. In addition, Hu's moments are invariant to scale and rotation, which is essential for the purpose explored in this work.

3.3.2 Beans bored by insect

The Hough transform for circle detection (HTC) [42] was used to detect the holes caused by the insect Acanthoscelides obtectus. The HTC is implemented in the OpenCV image processing library (https://opencv.org/). First, a median blur filter with a kernel of 3 × 3 pixels is applied to Igray, aiming to increase the success of the HTC applied in the sequence.

To avoid noise problems in the detection of the holes, maximum and minimum radii are calculated and adopted as parameters of the HTC. These values are easy to obtain, since the holes have homogeneous sizes, as can be seen in Fig. 6a.
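The broken-bean test of Eqs. 4–6 (Sect. 3.3.1) can be sketched as follows. This is an illustrative Python re-implementation, not the authors' C/C++ code; the Hu moments h1…h7 are assumed to have been computed beforehand from the connected component, and the numeric values below are toy values, not real moments:

```python
import math

def log_signature(hu):
    """Eqs. 5-6: m_i = -sign(h_i) * log|h_i| for the 7 Hu moments."""
    return [-math.copysign(1.0, h) * math.log(abs(h)) for h in hu]

def diff_sign(hu_a, hu_b):
    """Eq. 4: sum of |1/m_i^A - 1/m_i^B| over the 7 moments."""
    ma, mb = log_signature(hu_a), log_signature(hu_b)
    return sum(abs(1.0 / a - 1.0 / b) for a, b in zip(ma, mb))

T_DIFF_SIGN = 0.4  # threshold obtained experimentally (Sect. 3.3.1)

def is_broken(hu_grain, hu_standard):
    return diff_sign(hu_grain, hu_standard) > T_DIFF_SIGN

# toy values: a grain nearly identical to the standard passes...
standard = [2.1e-3, 4.5e-7, 1.2e-9, 3.3e-10, 1.1e-19, 2.2e-13, -9.0e-20]
similar  = [2.0e-3, 4.6e-7, 1.3e-9, 3.1e-10, 1.0e-19, 2.1e-13, -8.5e-20]
print(is_broken(similar, standard))    # False

# ...while a shape whose first moment deviates strongly is flagged
broken = [5.0e-1, 4.5e-7, 1.2e-9, 3.3e-10, 1.1e-19, 2.2e-13, -9.0e-20]
print(is_broken(broken, standard))     # True
```

The log transform compresses the enormous dynamic range of the raw Hu moments, so the per-moment differences become comparable and a single threshold can be applied to their sum.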

Fig. 5 Example of broken bean detection

Finally, the holes detected by the proposed approach are marked on the output image with a red circle, as shown in Fig. 6b.

3.3.3 Moldy beans

For detecting moldy grains, a convolutional neural network (CNN) was employed. It was trained using a dataset composed of 5920 subimages (central regions of the grains with 30 × 30 pixels, as shown in Fig. 7) extracted from healthy and defective grains. This dataset was divided into two parts: 4000 subimages for training and the remaining 1920 for validation of the training.

The CNN employed in this task was implemented using the YOLOv3 framework [43], specially designed for object detection, which was adjusted to receive images of 30 × 30 pixels in the input layer. The CNN YOLOv3 (You Only Look Once) is composed of 106 layers: 53 for the backbone and the other 53 dedicated to the task of object detection. In the training phase, we adopted 3000 epochs as the stop criterion.

Fig. 6 Example of bored bean detection. a Bored beans and b detected bored beans

4 Developed machine vision system

The proposed MVS is composed of a set of software and hardware (low-cost equipment) capable of automatically classifying the most consumed beans in Brazil (Carioca, Mulatto, and Black) and detecting three of the main defects: broken, bored by insect, and moldy.

4.1 Software

The software, including the algorithms for segmentation, classification, and detection of defects, was developed in C/C++ using the Microsoft Visual Studio 2015 platform and the OpenCV image processing library. The system interface, illustrated in Fig. 8, was built using tools from the Microsoft .NET Framework platform, which contains resources that facilitate the creation of layouts. We opted for C/C++ because we already had some algorithms written in this language, and also because it facilitates, in terms of portability, the execution of the software on different operating systems.

The command buttons on the interface are divided into blocks and allow the user to control the motor that moves the conveyor belt, turn the camera on/off, control the lighting inside the image acquisition chamber, trigger processing in offline or online mode, and save images (the input image and the images generated in all processing steps). Finally, the processing results are shown on the right side of the interface.

The developed software incorporates the approaches described in Section 3, as illustrated in the block diagram shown in Fig. 9. The first block (acquisition) represents the capture of the input image by the camera, the second block

(pre-processing) represents the step of mapping the foreground/background pixels of the input image using the built LUT, and the remaining blocks represent the steps of segmentation, classification, defect detection, and displaying of the processing results.

Fig. 7 Examples of images used for CNN training/validation. a Moldy grains and b healthy grains

4.2 Hardware

The hardware (Fig. 10) consists of equipment developed with low-cost electromechanical materials, such as a table made of structural aluminum that includes an image acquisition chamber, a conveyor belt controlled by a servo motor, a feeding mechanism (an input box where the grains to be inspected are spilled), and a box for discharging the inspected grains.

The equipment was developed to automate the visual quality inspection of beans, allowing this process to be carried out continuously (online mode). It was also used to acquire the images that compose the database used in the offline experiments.

For the construction of the image acquisition chamber that is fixed to the equipment, white matte acrylic was employed. The lighting is produced by LEDs distributed uniformly on the lid of the chamber, producing an illuminance of approximately 1300 lux, which was established after carrying out experiments considering different lighting intensities. Also attached to the lid of the acquisition chamber is an OV2710 CMOS Full-HD (1080p) camera. This camera is placed at a distance of 40 cm from the belt, as suggested by [44], and provides the high-resolution images (1920 × 1080 pixels) necessary for inspecting the moldy defect. It is worth mentioning that, for all other approaches, the images are resized to 1280 × 720 pixels in order to reduce the processing time.

To ensure stability in the image acquisition process, avoiding possible vibrations of the conveyor belt, an Easy Servo Motor controlled by an ATmega328P microcontroller was employed in the equipment. It provides precise speed adjustment and is responsible for controlling the lighting system and sending a signal for processing the images when the equipment is operating in continuous mode.

The total cost of the materials that make up the equipment, including the employed Full-HD camera, was US$1207.26.

Fig. 8 Interface of the software that composes the proposed MVS

5 Experimental setup

For conducting the experiments to evaluate the proposed approaches for segmentation, classification, and detection of



defects, we composed a database containing 270 images of samples of beans with different mixtures of skin colors and defects. In this work, we call these experiments the offline mode.

Fig. 9 Sequence of steps performed by the developed software

Fig. 10 Developed equipment

The database is composed of 4 sets, named DS1, DS2, DS3, and DS4, detailed in Table 1. As in [33], for the acquisition of the images, we considered beans from Group I, belonging to 3 classes/subclasses (Carioca, Black, and Mulatto), bought from different trade brands, the first two being the most consumed beans in Brazil according to [45]. DS1 and DS2 are composed of 100 images of 1920 × 1080 pixels each, divided into 10 subsets. DS3 is composed of 50 images of 1920 × 1080 pixels, divided into 5 subsets. In each subset, the images have the same quantities of Carioca, Mulatto, and Black beans.

It is important to remember that only the approach for detecting the moldy defect uses the images with a resolution of 1920 × 1080 pixels. For all other approaches, the images are resized to 1280 × 720 pixels in order to optimize the processing time.

The images from DS1, DS2, and DS3 (available for download at http://www.saraujo.pro.br/beans) were used to evaluate the segmentation and classification approaches, while DS4, detailed in Table 2, was employed to evaluate the approaches for the detection of defects.

To evaluate the proposed MVS (hardware and software), we conducted experiments in the online mode, in which the beans contained in a batch are spilled continuously onto the conveyor belt for the MVS to perform the inspection (counting the number of grains, determining the percentage of mixture of beans according to their skin colors and the

Table 1 Distribution of beans in DS1, DS2, and DS3

Subset  Number of images in each subset (DS1/DS2/DS3)  Carioca beans per image (DS1/DS2/DS3)  Mulatto beans per image (DS1/DS2/DS3)  Black beans per image (DS1/DS2/DS3)

1 10 10 10 100 20 100 0 20 0 0 60 0
2 10 10 10 95 20 95 5 60 5 0 20 0
3 10 10 10 95 60 95 0 20 0 5 20 5
4 10 10 10 90 40 90 5 40 5 5 20 5
5 10 10 10 85 40 85 10 20 10 5 40 5
6 10 10 – 85 20 – 5 40 – 10 40 –
7 10 10 – 80 30 – 10 30 – 10 40 –
8 10 10 – 85 30 – 15 40 – 0 30 –
9 10 10 – 85 40 – 0 30 – 15 30 –
10 10 10 – 70 34 – 15 33 – 15 33 –
Total 100 100 50 870 334 465 65 333 20 65 333 15

quantity of each defect). As in [33], we prepared lots (batches) of 1000 grains with different mixtures of beans, as follows: Lot 1 (930 Carioca and 70 Black beans) and Lot 2 (400 Carioca, 300 Mulatto, and 300 Black beans). In addition, we considered a lot containing 250 grains (100 healthy, 50 broken, 50 bored, and 50 moldy beans) specially prepared to test the software in detecting defects. Some videos generated during the online experiments are available at www.saraujo.pro.br/beans.

The validation of the results obtained was done manually, through visual observation of the processed images. As in most works presented in Section 2, the success rate was used to evaluate the performance of the proposed approaches. However, it is important to note that a direct comparison of the results obtained by our approaches with the results reported in the works described in Section 2 may not be fair, since the images from our dataset are different from the images used in such works (acquired by different equipment, under different lighting conditions, and with different resolutions). In addition, none of the works described in Section 2 investigated the detection of defects in beans. All experiments were performed using a PC with an Intel Core i7-7500U quad-core 2.7 GHz processor and 16 GB of RAM. This PC was also used to control the equipment in the online mode.

6 Results

In this section, the results obtained in the experiments performed in offline and online modes are presented and discussed. The offline experiments (using the composed database) served to evaluate the developed approaches, while the online experiments aimed at the evaluation of the proposed MVS (hardware and software).

6.1 Offline experiments

6.1.1 Segmentation and classification

Table 3 shows the success rates and the number of true positive (TP), false negative (FN), and false positive (FP) cases obtained in the evaluation of the segmentation and classification approaches using DS1, DS2, and DS3.

Regarding segmentation, the use of the CCG technique to treat only the regions in which the grains were not correctly segmented by WT and the heuristics for joining grains provided good results, both in terms of the average success rate (99.6%) and the processing time (0.8 s), when compared with the results obtained by [33] (99.88% and 17 s). In addition, the success rates (above 99%) obtained by the classification

Table 2 Distribution of defective


grains in DS4 Subset Number of images in Number of Number of Number of Number of
each subset healthy grains broken grains bored grains moldy grains

1 5 100 50 50 0
2 5 100 0 0 30
3 10 100 50 50 30
Total 20 300 100 100 60
Table 3 Results obtained by the segmentation and classification approaches in offline mode

Subset | TP (DS1/DS2/DS3) | FN (DS1/DS2/DS3) | FP (DS1/DS2/DS3) | Segmentation success rate % (DS1/DS2/DS3) | Classification success rate % (DS1/DS2/DS3)
1      | 998/999/997    | 0/1/2   | 2/0/1   | 99.8/99.9/99.7  | 99.5/99.8/99.8
2      | 997/999/996    | 1/0/4   | 2/1/0   | 99.7/99.9/99.6  | 99.6/99.7/99.9
3      | 991/1000/998   | 3/0/2   | 6/0/0   | 99.1/100.0/99.8 | 99.1/99.1/99.4
4      | 997/995/998    | 1/0/0   | 2/5/2   | 99.7/99.5/99.8  | 99.3/99.7/99.7
5      | 998/996/994    | 1/2/2   | 1/2/4   | 99.8/99.6/99.4  | 98.9/99.8/100.0
6      | 999/993/–      | 0/0/–   | 1/7/–   | 99.9/99.3/–     | 100.0/99.9/–
7      | 997/992/–      | 0/1/–   | 3/7/–   | 99.7/99.2/–     | 99.5/100.0/–
8      | 996/997/–      | 3/1/–   | 1/2/–   | 99.6/99.7/–     | 99.3/99.6/–
9      | 991/998/–      | 3/0/–   | 6/2/–   | 99.1/99.8/–     | 98.2/99.1/–
10     | 997/973/–      | 2/0/–   | 1/27/–  | 99.7/97.3/–     | 99.4/99.7/–
Total  | 9961/9942/4983 | 14/5/10 | 25/53/7 | 99.6/99.4/99.7  | 99.3/99.6/99.8
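The success rates in Table 3 follow directly from the TP/FN/FP counts: every grain in a subset is either a TP, an FN, or an FP, so the rate is TP over the sum of the three. A quick check against the totals row (the helper name is ours):

```python
def success_rate(tp, fn, fp):
    """Success rate (%) as the fraction of correctly handled grains."""
    return 100.0 * tp / (tp + fn + fp)

# Totals row of Table 3, segmentation: DS1, DS2, DS3
print(round(success_rate(9961, 14, 25), 1))  # 99.6
print(round(success_rate(9942, 5, 53), 1))   # 99.4
print(round(success_rate(4983, 10, 7), 1))   # 99.7
```

Note that the denominators recover the dataset sizes (10,000 grains for DS1 and DS2, 5000 for DS3), consistent with Table 1.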

Table 4 Results obtained by the approaches to detect defective grains in offline mode

Defect       | Grains correctly detected per image (average) | Success rate (%)
Broken beans | 40                                            | 80.0
Bored beans  | 45                                            | 90.0
Moldy beans  | 30                                            | 100.0

approach can also be considered a good result. This performance, combined with the low computational cost of the approach (0.1 s), corroborates its use in the proposed MVS. For comparison, the overall success rate and the time spent by the classification approach proposed in [33] are 99.99% and 1 s.

6.1.2 Detection of defects

The proposed approaches for detecting the broken and bored defects obtained quite satisfactory results. As can be seen in Table 4 and Fig. 11, these approaches were not able to detect all the defective grains, as also observed in the other images used in the experiments. Nevertheless, they detected 80% of the broken grains and 90% of the bored grains.

Regarding the bored defect, detection is more difficult in Carioca beans than in Mulatto or Black beans, due to their color and the existence of streaks that may form curved lines. In some cases, the holes are difficult to detect even for the human eye. The two approaches together require a computational time of approximately 0.1 s, which corroborates the feasibility of implementing them in the software of the proposed MVS.

With respect to the approach for detecting moldy beans, it achieved an excellent performance with the use of the RNC, obtaining a success rate of 100% in the experiments. The average time spent by the RNC to process an image of 1920 × 1080 pixels is 0.2 s, making a total time of 0.3 s to detect the three defects considered in this work. It is important to highlight that, for identification of the moldy defect, a single subimage of n × n pixels is extracted from each grain of an image with resolution 1920 × 1080 pixels, n being the smallest diameter of the segmented grain.

Figure 11 shows part of an image from subset 3 of DS4 with the detected defects, labeled as follows: broken ("Br", in blue), bored ("Bo", in red), and moldy ("Mo", in pink).

6.2 Online experiments

In this mode, the beans contained in a batch are spilled continuously onto the conveyor belt for the MVS to perform the inspection. As explained in Section 5, we prepared two lots of 1000 grains for evaluating the segmentation and classification approaches. For each lot, three experiments (inspections) were carried out, and the average results obtained are presented in Table 5.
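The single-subimage scheme used for the moldy defect (Section 6.1.2) can be sketched as follows: given a segmented grain's bounding box, take n as its smallest side (the grain's smallest diameter) and cut an n × n window from the full-resolution image. This is a sketch with plain Python lists; the centring choice and the names are our assumptions, not the authors' code:

```python
def crop_square(img, x0, y0, x1, y1):
    """Extract an n x n subimage from a grain's bounding box (x0,y0)-(x1,y1),
    where n = min(width, height), centred on the box.
    Boundary clamping is omitted for brevity."""
    w, h = x1 - x0, y1 - y0
    n = min(w, h)
    cx, cy = x0 + w // 2, y0 + h // 2
    left, top = cx - n // 2, cy - n // 2
    return [row[left:left + n] for row in img[top:top + n]]

image = [[0] * 200 for _ in range(120)]      # dummy 200x120 frame
patch = crop_square(image, 40, 30, 100, 70)  # 60x40 box -> 40x40 patch
```

A square patch of a fixed aspect ratio is a convenient input for a convolutional classifier, which is consistent with the 100% moldy-detection rate being obtained from one subimage per grain.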

Fig. 11 Example of result obtained in the detection of defects

As can be seen in Table 5, the results of grain segmentation and classification are lower than those obtained in offline mode. This occurs mainly due to the decrease in image quality, since the grains are moving at the moment the image is acquired, and also due to the existence of grains placed on the borders of two consecutive images, which are not processed, as shown in Fig. 12. The equipment has been adjusted to acquire the images so as to avoid this last problem, but it is difficult to prevent entirely; overcoming it is part of our future research plans.

Fig. 12 Two consecutive acquired images in online mode

Table 5 Average results obtained by the segmentation and classification approaches in online mode (three inspections per lot)

Class   | Lot 1: grains correctly segmented (inspections 1/2/3) | Lot 1: classification success rate (%) | Lot 2: grains correctly segmented (inspections 1/2/3) | Lot 2: classification success rate (%)
Carioca | 897/920/904 | 97.5 | 389/397/410  | 98.8
Mulatto | 9/10/13     | –    | 300/285/290  | 97.2
Black   | 71/68/62    | 95.7 | 296/287/300  | 98.1
Total   | 977/998/979 | 97.4 | 985/969/1000 | 98.1

Based on Table 5, it is possible to observe that the experiments with segmentation and classification in continuous mode showed excellent results in terms of success rates when compared with those obtained in [33] (overall success rate of 91.33%, including segmentation and classification).

Regarding the detection of defects, we conducted three experiments with a lot of beans containing 100 healthy, 50 broken, 50 bored, and 50 moldy grains, as described in Section 5. In these experiments, the MVS was capable of detecting, on average, 82% of the broken, 88% of the bored, and 85% of the moldy grains, as shown in Table 6.

Table 6 Average results obtained in detection of defects in online mode

Defect       | Grains correctly detected (average) | Success rate (%)
Broken beans | 41                                  | 82.0
Bored beans  | 44                                  | 88.0
Moldy beans  | 41                                  | 85.0

Although these defect-detection percentages may seem low, the performance of the proposed MVS in online mode can be considered satisfactory, since in many cases broken grains are glued to whole grains, or even to other broken grains, leading to failures. Regarding the errors in the detection of bored grains, in many cases the holes are difficult to detect even for human vision, especially in Carioca beans, for the reasons explained in Section 6.1.2. Finally, regarding the detection of moldy grains, there were cases of defective grains confused with Mulatto grains due to the similarity of their skin colors and textures; this mistake can be made even by a human operator trained to identify bean defects.

6.3 Processing times

The processing times spent by the proposed approaches, shown in Table 7, were extremely important to enable the development of the MVS presented in this work. Considering the steps of segmentation, classification, and detection of defects, each acquired image was processed in approximately 1.2 s in offline mode and 1.5 s in online mode. The time spent in online mode is higher due to the larger number of grains in each processed image, which may contain more than 200 grains. This time could be even shorter using the parallelism resources provided by a GPU (graphics processing unit); however, this could compromise the use of the software on embedded platforms, which is one of our proposals for future work.

6.4 Discussion of the results

Based on the results described in Sections 6.1, 6.2, and 6.3, the MVS proved to be effective for the tasks of segmentation, classification, and detection of the main defects (broken, bored, and moldy) found in the beans most consumed in Brazil (Carioca, Mulatto, and Black). It should be noted that only the classification approach is specific to Brazilian beans; obviously, it can be easily adapted to classify other types of beans by simply retraining the MLP-ANN.
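The per-image totals given in Section 6.3 (1.2 s offline, 1.5 s online) are simply the sums of the per-step times listed in Table 7; working in integer milliseconds avoids floating-point noise (a quick arithmetic check, not the authors' code):

```python
# Per-step times from Table 7, in milliseconds
steps_offline = {"segmentation": 800, "classification": 100,
                 "broken": 40, "bored": 60, "moldy": 200}
# Online differs only in segmentation (1.1 s instead of 0.8 s)
steps_online = dict(steps_offline, segmentation=1100)

total_offline_s = sum(steps_offline.values()) / 1000  # 1.2 s per image
total_online_s = sum(steps_online.values()) / 1000    # 1.5 s per image
```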

To evaluate the capacity of the equipment in the inspection of grains, experiments were performed varying the speed of the conveyor belt, from which it was observed that the best results were obtained with speeds equal to or less than 0.04 m/s. These experiments showed that the higher the speed employed, the lower the success rates in grain segmentation. Obviously, if the segmentation fails, the subsequent steps are compromised.

Adopting the ideal speed (0.04 m/s), the time window between the acquisitions of two consecutive images is approximately 8 s. Thus, in online mode, this is the time that must be considered to compute the number of images and, consequently, the amount of grains that can be inspected by the MVS in a given interval of time. For example, to inspect 1 kg of beans (≅ 4000 grains), 160 s are required.

Regarding segmentation, the proposed approach presented an excellent result in all aspects, since it obtained success rates of 99.6% and 98.5% in the offline and online experiments, respectively, requiring 0.8 s to process an image in offline mode and 1.1 s in online mode, including the pre-processing step. The classification approach also demonstrated good performance, considering its overall success rates (99.6% in offline mode and 97.8% in online mode) and its computational cost of approximately 0.1 s in both modes. The results obtained in the detection of broken, bored, and moldy beans showed that the developed approaches are also adequate, since they achieved overall success rates of 90% and 85%, respectively, in offline and online modes. In both modes, these approaches require only 0.1 s to detect broken and bored grains and 0.2 s to detect moldy grains.

Table 7 Time spent by each proposed approach to process an image in offline and online modes

Approach                          | Offline (s) | Online (s)
Segmentation                      | 0.8         | 1.1
Classification                    | 0.1         | 0.1
Defect detection – broken beans   | 0.04        | 0.04
Defect detection – bored beans    | 0.06        | 0.06
Defect detection – moldy beans    | 0.2         | 0.2
Total time spent                  | 1.2         | 1.5

The results obtained by the proposed approaches, in terms of success rates and computational costs, combined with the low monetary cost of assembling the equipment, demonstrate the technical and economic feasibility of the proposed MVS.

From the point of view of practical application, the proposed MVS may be more suitable for use in laboratories of

quality inspection to assist in the processes of visual inspection of grains. Considering that the quality analysis of Brazilian beans is performed on samples of 250 g, according to BMALS procedures, the proposed MVS would take approximately 40 s to analyze a sample.

Funding This work was supported by the FAPESP–São Paulo Research Foundation (Proc. 2017/05188-9) and by the CNPq–Brazilian National Research Council (research scholarship granted to S. A. Araújo, Proc. 313765/2019-7).
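Both throughput figures quoted in Section 6.4 (160 s for 1 kg ≅ 4000 grains, and about 40 s for a 250-g sample ≅ 1000 grains) follow from the 8-s acquisition window and roughly 200 grains per image; a back-of-envelope sketch (the names are ours):

```python
import math

ACQ_WINDOW_S = 8        # time between consecutive acquisitions at 0.04 m/s
GRAINS_PER_IMAGE = 200  # each online image may hold about 200 grains

def inspection_time_s(n_grains):
    """Seconds needed to inspect a batch of n_grains."""
    return math.ceil(n_grains / GRAINS_PER_IMAGE) * ACQ_WINDOW_S

print(inspection_time_s(4000))  # 1 kg of beans -> 160 s
print(inspection_time_s(1000))  # 250-g sample -> 40 s
```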
7 Conclusions and future works

In this work, computational approaches for the segmentation, classification, and detection of three of the main defects found in beans (broken, bored, and moldy) were proposed. From these approaches, we designed the software that, together with the equipment developed, gave rise to the MVS for quality inspection of beans.

The approaches developed, mainly those for segmentation and defect detection, represent important scientific contributions. Regarding segmentation, a fast and robust method was proposed, capable of spatially localizing the grains in the analyzed image even if they are glued to each other (touching grains), overcoming limitations of many approaches found in the literature. The defect detection approaches constitute a novelty in terms of application. In addition, all approaches presented in this work are feasible for embedded systems, since they require low computational time.

In the experiments conducted with the image database (offline), the following success rates were obtained: segmentation = 99.6%, classification = 99.6%, and defect detection = 90.0% (80.0% for broken, 90.0% for bored, and 100.0% for moldy grains), with a processing time of 1.2 s. In addition, the experiments in online mode demonstrated the viability of the proposed MVS, since it is capable of processing an image of 1280 × 720 pixels in an average time of 1.5 s, with success rates of 98.5% in segmentation, 97.8% in classification, and equal to or greater than 82.0% in defect detection. This represents an important technological contribution, considering that, to the best of our knowledge, it is the first MVS (set of hardware and software) in the literature dedicated to the visual inspection of beans.

Finally, our image dataset can also be considered an important contribution, since it will be publicly available for download, contributing to further research addressing the theme "visual inspection of grains."

In future work, we intend, first, to solve the problems and limitations of the proposed approaches mentioned throughout the work; second, to carry out experiments running the developed software on an embedded platform, such as the Raspberry Pi; and, finally, to incorporate new approaches into the MVS, making it able to classify a greater variety of beans and to detect other defects and foreign bodies that can be mixed with the grains, such as pieces of branches, leaves, and stones, among others.

References

1. Somavilla M, de Oliveira VR, Storck CR (2011) Centesimal and mineral composition of beans when defrosted in a microwave oven. Discip Sci Saúde 12(1):103–114
2. Coêlho J (2017) Grain production – beans, corn and soybeans. Setorial Notebook ETENE, Fortaleza, pp 1–14. [Online]. Available: https://www.bnb.gov.br/documents/80223/4141162/51_graos.pdf/42dd9e02-f9fe-10fc-69ff-314f3c89faf8
3. Aggarwal AK, Mohan R (2010) Aspect ratio analysis using image processing for rice grain quality. Int J Food Eng 6(5):1–14. https://doi.org/10.2202/1556-3758.1788
4. BMALS (2015) Brazilian Ministry of Agriculture, Livestock and Supply – Projections of agribusiness Brazil 2014/2015 to 2024/2025, long term projections. BMALS, Brasilia, Brasil, p 133. Accessed: Apr. 01, 2018. [Online]. Available: http://www.sapc.embrapa.br/arquivos/consorcio/informe_estatistico/Projecoes_Agronegocio_CAFE_Mapa_2015_2025.pdf
5. Stegmayer G, Milone DH, Garran S, Burdyn L (2013) Automatic recognition of quarantine citrus diseases. Expert Syst Appl 40(9):3512–3517. https://doi.org/10.1016/j.eswa.2012.12.059
6. Pesante-Santana JA, Woldstad JC (2000) Quality inspection task in modern manufacturing. Ind Manag Syst Eng Fact Public:2260–2263
7. Kiliç K, Boyaci IH, Köksel H, Küsmenoglu I (2007) A classification system for beans using computer vision system and artificial neural networks. J Food Eng 78(3):897–904. https://doi.org/10.1016/j.jfoodeng.2005.11.030
8. Patil NK, Yadahalli RM, Pujari J (2011) Comparison between HSV and YCbCr color model color-texture based classification of the food grains. Int J Comput Appl 34(4):51–57
9. Cabral JDD, de Araújo SA (2015) An intelligent vision system for detecting defects in glass products for packaging and domestic use. Int J Adv Manuf Technol 77(1–4):485–494
10. Embrapa (2012) Bean classification manual. Embrapa, Brasilia, Brasil, pp 1–25. [Online]. Available: https://ainfo.cnptia.embrapa.br/digital/bitstream/item/101039/1/manualilustrado-06.pdf
11. Sampaio VAM (2016) Grains classification – step by step. Bahia Farmers and Irrigators Association – Aiba, pp 1–23. [Online]. Available: https://aiba.org.br/wp-content/uploads/2017/01/Cartilha-Classificacao-de-Graos-Versao-Digital.pdf
12. Posada J, Toro C, Barandiaran I, Oyarzun D, Stricker D, de Amicis R, Pinto EB, Eisert P, Dollner J, Vallarino I (2015) Visual computing as a key enabling technology for industrie 4.0 and industrial internet. IEEE Comput Graph Appl 35(2):26–40
13. Aguilera J, Cipriano A, Eraña M, Lillo I, Mery D, Soto A (2007) Computer vision for quality control in Latin American food industry, a case study. In: Int Conf on Computer Vision (ICCV 2007): Workshop on computer vision applications for developing countries, pp 1–11
14. Swati, Chanana R (2014) Grain counting method based on machine vision. Int J Adv Technol Eng Sci 2(8):328–332
15. Dubosclard P, Larnier S, Konik H, Herbulot A, Devy M (2014) Automatic method for visual grading of seed food products. Lect Notes Comput Sci (ICIAR 2014) 8814:485–495. https://doi.org/10.1007/978-3-319-11758-4_53
16. Dubosclard P, Larnier S, Konik H, Herbulot A, Devy M (2015) Deterministic method for automatic visual grading of seed food products. In: 4th International Conference on Pattern Recognition Applications and Methods, pp 1–6
17. Dubosclard P, Larnier S, Konik H, Herbulot A, Devy M (2015) Automated visual grading of grain kernels by machine vision. In: Twelfth International Conference on Quality Control by Artificial Vision, vol 9534, pp 1–8. https://doi.org/10.1117/12.2182793
18. Zareiforoush H, Minaei S, Alizadeh MR, Banakar A, Samani BH (2016) Design, development and performance evaluation of an automatic control system for rice whitening machine based on computer vision and fuzzy logic. Comput Electron Agric 124:14–22. https://doi.org/10.1016/j.compag.2016.01.024
19. Bhat S, Panat S, Arunachalam N (2017) Classification of rice grain varieties arranged in scattered and heap fashion using image processing. In: Ninth International Conference on Machine Vision (ICMV 2016), vol 10341, pp 1–6. https://doi.org/10.1117/12.2268802
20. Ramos PJ, Prieto FA, Montoya EC, Oliveros CE (2017) Automatic fruit count on coffee branches using computer vision. Comput Electron Agric 137:9–22. https://doi.org/10.1016/j.compag.2017.03.010
21. Arboleda ER, Fajardo AC, Medina RP (2018) Classification of coffee bean species using image processing, artificial neural network and K nearest neighbors. In: IEEE International Conference on Innovative Research and Development (ICIRD), pp 1–5
22. Zambrano CEP, Caceres JCG, Ticona JR, Beltrán-Castanón NJ, Cutipa JMR, Beltrán-Castanón CA (2018) An enhanced algorithmic approach for automatic defects detection in green coffee beans. In: 9th International Conference on Pattern Recognition Systems (ICPRS 2018), pp 1–8
23. Venora G, Grillo O, Ravalli C, Cremonini R (2007) Tuscany beans landraces, on-line identification from seeds inspection by image analysis and linear discriminant analysis. Agrochimica 51(4–5):254–268
24. Venora G, Grillo O, Ravalli C, Cremonini R (2009) Identification of Italian landraces of bean (Phaseolus vulgaris L.) using an image analysis system. Sci Hortic (Amsterdam) 121(4):410–418. https://doi.org/10.1016/j.scienta.2009.03.014
25. Laurent B, Ousman B, Dzudie T, Carl MFM, Emmanuel T (2010) Digital camera images processing of hard-to-cook beans. J Eng Technol Res 2(9):177–188
26. Araújo SA, Alves WAL, Belan PA, Anselmo KP (2015) A computer vision system for automatic classification of most consumed Brazilian beans. Lect Notes Comput Sci 9475:45–53. https://doi.org/10.1007/978-3-319-27863-6
27. Belan PA, Araújo SA, Alves WAL (2016) An intelligent vision-based system applied to visual quality inspection of beans. Lect Notes Comput Sci (ICIAR 2016) 9730:801–809. https://doi.org/10.1007/978-3-319-41501-7
28. Belan PA, de Macedo RAG, Pereira MA, Alves WAL, Araújo SA (2018) A fast and robust approach for touching grains segmentation. Lect Notes Comput Sci (ICIAR 2018) 10882:482–489. https://doi.org/10.1007/978-3-319-93000-8
29. Liu J, Yang WW, Wang Y, Rababah TM, Walker LT (2011) Optimizing machine vision based applications in agricultural products by artificial neural network. Int J Food Eng 7(3):1–25. https://doi.org/10.2202/1556-3758.1745
30. Siddagangappa MR, Kulkarni AH (2014) Classification and quality analysis of food grains. J Comput Eng 16(4):01–10
31. Nasirahmadi A, Behroozi-Khazaei N (2013) Identification of bean varieties according to color features using artificial neural network. Span J Agric Res 11(3):670–677. https://doi.org/10.5424/sjar/2013113-3942
32. Alban N, Laurent B, Martin Y, Ousman B (2014) Quality inspection of bag packaging red beans (Phaseolus vulgaris) using fuzzy clustering algorithm. Br J Math Comput Sci 4(24):3369–3386. https://doi.org/10.9734/bjmcs/2014/12981
33. Araújo SA, Pessota JH, Kim HY (2015) Beans quality inspection using correlation-based granulometry. Eng Appl Artif Intell 40:84–94. https://doi.org/10.1016/j.engappai.2015.01.004
34. Alves WAL, Hashimoto RF (2014) Ultimate grain filter. In: IEEE International Conference on Image Processing (ICIP), pp 2953–2957. https://doi.org/10.1109/ICIP.2014.7025597
35. Garcia M, Trujillo M, Chaves D (2017) Global and local features for bean image classification. In: 7th Latin American Conference on Networked and Electronic Media (LACNEM 2017), pp 45–50. https://doi.org/10.1049/ic.2017.0034
36. OuYang A, Gao R, Liu Y, Dong X (2010) An automatic method for identifying different variety of rice seeds using machine vision technology. In: 2010 Sixth International Conference on Natural Computation, vol 1, pp 84–88
37. Pearson T (2009) Hardware-based image processing for high-speed inspection of grains. Comput Electron Agric 69(1):12–18
38. Jayas DS, Singh CB (2012) Grain quality evaluation by computer vision. In: Sun D-W (ed) Computer vision technology in the food and beverage industries. Woodhead Publishing, Cambridge, UK, pp 400–421. https://doi.org/10.1533/9780857095770.3.400
39. Soille P (2004) Morphological image analysis: principles and applications, 2nd edn. Springer-Verlag, Berlin Heidelberg, 392 pp. https://doi.org/10.1007/978-3-662-05088-0
40. Araújo SA, Kim HY (2011) Ciratefi: an RST-invariant template matching with extension to color images. Integr Comput Aided Eng 18(1):75–90. https://doi.org/10.3233/ICA-2011-0358
41. Hu MK (1962) Visual pattern recognition by moment invariants. IRE Trans Inf Theory 8:179–187. https://doi.org/10.1109/TIT.1962.1057692
42. Illingworth J, Kittler J (1987) The adaptive Hough transform. IEEE Trans Pattern Anal Mach Intell PAMI-9(5):690–698
43. Redmon J, Farhadi A (2018) YOLOv3: an incremental improvement. arXiv. [Online]. Available: http://arxiv.org/abs/1804.02767
44. Anami BS, Savakar DG (2010) Influence of light, distance and size on recognition and classification of food grains' images. Int J Food Eng 6(2):1698. https://doi.org/10.2202/1556-3758
45. Souza TLPO et al (2013) Common bean cultivars from Embrapa and partners available for 2013. [Online]. Available: http://ainfo.cnptia.embrapa.br/digital/bitstream/item/97404/1/comunicadotecnico-211.pdf (accessed Mar. 01, 2018)

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
