
sensors

Article
Smart Sensing and Adaptive Reasoning for Enabling
Industrial Robots with Interactive Human-Robot
Capabilities in Dynamic Environments—A
Case Study
Jaime Zabalza 1, Zixiang Fei 2, Cuebong Wong 2, Yijun Yan 1, Carmelo Mineo 1,
Erfu Yang 2,*, Tony Rodden 3, Jorn Mehnen 2, Quang-Cuong Pham 4 and Jinchang Ren 1
1 Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, UK;
j.zabalza@strath.ac.uk (J.Z.); yijun.yan@strath.ac.uk (Y.Y.); carmelo.mineo@strath.ac.uk (C.M.);
jinchang.ren@strath.ac.uk (J.R.)
2 Department of Design, Manufacture and Engineering Management, University of Strathclyde,
Glasgow G1 1XJ, UK; zixiang.fei@strath.ac.uk (Z.F.); cuebong.wong@strath.ac.uk (C.W.);
jorn.mehnen@strath.ac.uk (J.M.)
3 Advanced Forming Research Centre, University of Strathclyde, Renfrewshire PA4 9LJ, UK;
tony.rodden@strath.ac.uk
4 School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue,
Singapore 639798, Singapore; cuong@ntu.edu.sg
* Correspondence: erfu.yang@strath.ac.uk; Tel.: +44-141-574-5279

Received: 31 January 2019; Accepted: 8 March 2019; Published: 18 March 2019 

Abstract: Traditional industry is seeing an increasing demand for more autonomous and flexible
manufacturing in unstructured settings, a shift away from the fixed, isolated workspaces where
robots perform predefined actions repetitively. This work presents a case study in which a robotic
manipulator, namely a KUKA KR90 R3100, is provided with smart sensing capabilities such
as vision and adaptive reasoning for real-time collision avoidance and online path planning in
dynamically-changing environments. A machine vision module based on low-cost cameras and
color detection in the hue, saturation, value (HSV) space is developed to make the robot aware
of its changing environment, allowing the detection and localization of a randomly moving obstacle.
Path correction for collision avoidance between such obstacles and the robotic manipulator is
achieved by an adaptive path planning module working alongside a dedicated robot control module,
with the three modules running simultaneously. These smart sensing capabilities allow smooth
interaction between the robot and its dynamic environment, where the robot reacts to dynamic
changes through autonomous thinking and reasoning with reaction times below the average human
reaction time. The experimental results demonstrate that
effective human-robot and robot-robot interactions can be realized through the innovative integration
of emerging sensing techniques, efficient planning algorithms and systematic designs.

Keywords: adaptive reasoning; dynamic environments; human-robot interaction; path planning; robot control; smart sensing

1. Introduction
The recent developments in robotics [1] and autonomous systems [2] have produced new and
world-changing possibilities of integrating robotic systems into many different human activities
and engineering practices. From domestic settings [3] to outer space exploration [4], and spanning
across an endless number of applications in areas such as healthcare [5], non-destructive testing [6],
agriculture [7], human recognition [8] and firefighting [9], there is a wide range of smart algorithms
enabling the introduction of robotic systems for advanced activities previously undertaken only by
humans. Indeed, smart robotic systems have the potential to perform faster and more accurately,
to learn and adapt to their environment, and to make intelligent decisions [10].
However, the vast majority of existing industrial robotic systems operate with a predefined series
of tasks that are planned offline. In the case of industrial robotic manipulators, predefining the path
taken by the end effector within a given workspace has proven effective for traditional mass-production
processes based on repetition, but such systems lack the intelligence or perception capability to adapt to changes in the
environment [11]. Consequently, additional research and development efforts are necessary to deploy
smart robotic solutions into dynamically-changing or unstructured environments, particularly for
workspaces shared by independent robots and/or human workers [12].
Therefore, there is a great opportunity for industries to gain a competitive edge through the
implementation of collaborative robotic systems able to interact with humans [13,14]. New and
robust sensing capabilities are needed to provide robotic systems with reasoning and autonomous
thinking [15,16], where these capabilities have to reliably perceive the robot workspace under real
working conditions. There are numerous sensing techniques that can be adopted to perceive the
robot’s surroundings, such as ultrasonic [17] and laser [18] sensing. Nevertheless, machine vision [15] is an
approachable strategy with satisfactory performance and affordable cost, based on the use of optical
cameras with real-time image processing. Additionally, the introduction of new sensing capabilities
requires an appropriate methodology to integrate with other robotic modules such as trajectory
tracking and path planning [19] to effectively process sensing information at the decision-making level,
leading to autonomous reasoning, flexibility and adaptability.
A number of existing implementations of robotic manipulators able to adapt to their environment
to some degree can be found in the literature [10,20,21]. For example, in [20], the mobile robot platform
Care-O-bot 3 was combined with a time-of-flight sensor, working with point clouds. On a different
note, a multisensory system was proposed in [10], including an industrial camera, a laser and a temperature
sensor, where the multiple inputs were processed by artificial neural networks to control an industrial
robot. Furthermore, the work in [21] focused on path generation in a pre-defined environment
using a Lego Mindstorms EV3 manipulator arm. However, research in this area is still in its infancy,
and opportunities for significant improvements exist for the development of robust manipulator
systems for environments with real working conditions, beyond laboratory settings. For instance,
researchers tend to select expensive sensors [20] to obtain satisfactorily accurate measurements, overlooking that
this accuracy can also be achieved at the signal processing level. Additionally, the introduction of sensing
capabilities in such systems is often rough, lacking a smooth integration into the overall system [20,21].
Finally, it is also common to observe a lack of modularity in these integrated systems, where the
designs are focused on particular cases and the strategies for sensing, operation and control
are tightly coupled. Consequently, individual functions are not easily interchangeable with other
state-of-the-art technologies without significant implications on the rest of the system.
Following previous work presented in [22], an extension including more detailed analysis and
evaluations is provided for a case study on the development of a robotic manipulator for interactions
with a dynamically-changing environment. The robotic system is provided with sensing capabilities by
means of a low-cost machine vision module, such that it is able to perform pick-and-place operations
through path planning [19] with collision avoidance in real time. The overall system comprises three
independent and changeable modules: (i) machine vision, (ii) path planning and (iii) robot control,
running in parallel for efficient performance. This design leads to an effectively integrated system with
wide modularity.
The KUKA QUANTEC KR90 R3100 [23], an industrial robotic manipulator found in many
factories, is used for the case study. Nevertheless, the proposed system is transferable to other robots
and industrial applications (e.g., the approach was also tested successfully on a small KUKA KR06
R900 robot [24]). From the experiments, the proposed system is able to efficiently operate under
dynamic conditions, with reaction times faster than the average human reaction time, estimated at
180 ms [20]. The results demonstrate the feasibility of the proposed approach for the deployment of
industrial robots into unstructured, frequently changing environments with positive implications for
human-robot interactions. Hence, the main contribution of this work is the development
of a highly integrated system built up from independent and easily interchangeable modules, leading
to wide modularity for future extensions; the system has been implemented, tested and validated on an industrial robot,
and its performance has been proven effective with reaction times faster than the average human reaction time.
The present manuscript is organized as follows: Section 2 gives an overview of the proposed
system and its design. Sections 3–5 present the machine vision, path planning and robot control
modules, respectively. Then, Section 6 describes the experimental setup and Section 7 evaluates the
performance achieved by the proposed system through simulations and a physical demonstrator,
with concluding remarks drawn in Section 8.

2. System Overview
In this work, an integrated system based on a robotic manipulator is proposed, where the robot
can perform operations in real time under dynamic conditions. Online planning enables
a robotic end effector to perform pick-and-place tasks within a given workspace. Such online
planning consists of moving the robot to a start (pick) position, picking a given object, transporting it to a
given goal (place) position and releasing it.
Traditionally, this is a manufacturing operation carried out through predefined tasks programmed
offline, as the workspace (environment of the robot) is well structured and fixed. However, the aim
here is to design a system able to work in a dynamic scenario, where the workspace can change
unpredictably at any time. To simulate a dynamic scenario in the experiments, a given obstacle
moving within the workspace is introduced such that it can intercept the trajectory of the robot during
operation. Consequently, the system is required to perceive changes in the environment accurately and
re-plan the robot’s trajectory in real-time in response to potential collisions. This behavior is critical to
robots that must interact with freely changing environments in which other agents (such as humans
and robots) act within the robot workspace.
Advanced perception of the world in robots is made possible by giving them the required sensing
capabilities. This is achieved by a sensing module that is responsible for acquiring environmental
information through peripheral devices and data processing. In this work, the sensing strategy adopted
is based on machine vision [15], where optical cameras are used in conjunction with image processing
techniques. The resulting geometric information of the world is then interpreted and applied to
decision-making processes. Here, an online path planner retrieves the geometric obstacle information
and re-plans a valid collision-free path to complete the required pick-and-place task. Finally, the output
from this reasoning process is sent to a controller to execute the path on the physical robot. Trajectory
generation that obeys kinematic constraints of the robot is performed locally within the controller
through an add-on interfacing software.
The proposed system consists of three independent modules: (i) machine vision, (ii) path planning
and (iii) robot control, linked in parallel to form an efficient integrated system with wide modularity
(see Figure 1). All modules work in real-time, and communications are maintained across modules.
The machine vision module performs obstacle detection, where dynamically-moving obstacles are
tracked in the robot workspace. The decision-making process is derived from the path planning
module, where the search for optimal, feasible paths for pick-and-place operations is performed based
on input obtained from the machine vision to update the current geometric representation of the
environment. These two modules communicate by TCP/IP sockets [25], where the machine vision
software acts as a server, and the path planning module acts as a client. Finally, the resulting geometric
paths are sent to the robot control module to drive the physical robot along the specified paths through
Dynamic Link Libraries (DLLs). In the following sections, each of these modules is described in detail.
Figure 1. Overview of the proposed system. Three modules running simultaneously in parallel: (i) machine vision (smart sensing), (ii) path planning (reasoning, decision making), and (iii) robot control (movement coordination).

3. Machine Vision Module

Machine vision is used to provide the robotic system with sensory attributes. This module is based on optical cameras acquiring frames in real time. The images are processed to extract relevant information about the robot environment, particularly the position of a moving obstacle in its workspace, to avoid collisions.

The machine vision is based on three independent stages: (i) frame acquisition, (ii) image processing and (iii) data communication (see Figure 2). Firstly, the acquisition stage controls the optical cameras for capturing the video stream, setting the acquisition frame rate, resolution and related parameters. Secondly, the image processing stage processes the acquired frames to perform obstacle detection. In this work, the obstacle detection is based on color with some additional filtering (by size). Finally, the last stage formats the obstacle location obtained from image processing into information packages ready to be sent from the machine vision module when requested by the path planning module. This data transmission is based on TCP/IP sockets [25]. These three stages are implemented by three independent threads running in parallel. Consequently, the computation times for acquisition, image processing and communication do not accumulate. They are explained in the following subsections.

Figure 2. Implementation architecture for machine vision. Three different threads running in parallel for acquisition, image processing and communication. The input comes from the camera devices, while the output is stored in an information package to be sent to other modules in the system.
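The three-thread design can be sketched as follows. This is a minimal Python sketch with placeholder stage bodies (names, rates and the dummy obstacle values are assumptions, not the authors' code); it only illustrates how keeping a single "latest" result per stage prevents the per-stage computation times from accumulating.

import threading
import time

# Shared state: each stage only keeps the most recent result, so a slower stage
# never builds up a backlog and the stage latencies do not add up.
latest_frame = None
latest_obstacle = None
lock = threading.Lock()

def acquisition():                       # stage (i): grab frames from the cameras
    global latest_frame
    while True:
        frame = ("frame", time.time())   # placeholder for a real camera read
        with lock:
            latest_frame = frame
        time.sleep(1 / 30)               # ~30 fps

def processing():                        # stage (ii): detect the obstacle
    global latest_obstacle
    while True:
        with lock:
            frame = latest_frame
        if frame is not None:
            with lock:
                latest_obstacle = (50, 30, 20, 10)   # placeholder (x, y, w, h) in cm
        time.sleep(0.01)

def communication():                     # stage (iii): answer requests on demand
    while True:
        with lock:
            package = latest_obstacle
        # a real implementation would reply to TCP/IP requests here (Section 3.3)
        time.sleep(0.01)

if __name__ == "__main__":
    for fn in (acquisition, processing, communication):
        threading.Thread(target=fn, daemon=True).start()
    time.sleep(0.1)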
3.1. Data Acquisition
Among the different acquisition devices available for this case study, low-cost webcams were adopted to conceptually demonstrate that the enhancement of sensing capabilities can be achieved at the image processing level. In this particular case study, two cameras were placed off-board in fixed locations, instead of being mounted on the robot as in other works [26]. Off-board cameras simplify the computation of spatial coordinates in real-time and enable perception of the environment. However, this scheme can create situations in which the robotic arm invades the camera's field of view and would hide potentially moving obstacles. For this reason, two cameras were used, and are henceforth denoted as Cam-1 and Cam-2. The main camera, Cam-1, was placed overhead, capturing a wide view of the workspace. Cam-2 was installed as a complementary camera at the side, and oriented in an orthogonal direction to Cam-1. This ensured that any moving obstacle would always be detected by at least one of the cameras, solving the robot intrusion problem of a single-camera setup. The exact location of the two low-cost cameras, as well as other considerations, are discussed in the experimental setup (Section 6).
3.2. Image Processing
The frame(s) from the cameras acquired in the previous stage are then processed to perform obstacle detection in 2D (a constant height is assumed for 3D). However, initial offline calibration is required to configure the machine vision parameters. This offline calibration (see Figure 3) is carried out during the system setup, with an expected low frequency of re-calibration, as lighting conditions in the workspace (and related industrial environments) are constant over time. It includes three steps. First, a given contour is defined for masking the frames, removing any information beyond the robot workspace. Then, several calibration points are measured within the workspace and taken as a reference for the computation of spatial locations. This step is necessary for the extraction of a given obstacle's location via a projection algorithm, which translates pixel coordinates in the image to 2D real-world spatial coordinates. Finally, fine-tuning of the parameters (described below) for obstacle detection is performed using a manual adjustment tool (Figure 3b), with the resulting effects shown in real-time. This real-time adjustment allows easy tuning to handle a wide range of noise levels, hence leading to robust obstacle detection.

Figure 3. Calibration used in the machine vision module: (a) Main procedures for overall calibration at different stages; (b) Control panel window for adjustment of parameters in real time.

There is a remarkable number of potential solutions for implementing obstacle (object) detection. In up-to-date research, it is possible to find contours, descriptors and their combination [27], extensions of the correlation filter for object tracking [28], combination of color and depth (3D) images [29], probabilistic approaches to fuse information from multiple cameras [30], alignment of hybrid visual features to register visible and infrared images [31], and even smart calibration procedures [32,33]. However, these techniques tend to be complex, with an expensive computational cost not suitable for this work.

Therefore, the obstacle detection in this work is performed by color discrimination [34]. Unlike the traditional Red-Green-Blue (RGB) color space, the Hue-Saturation-Value (HSV) approach involves a parametrization including not only true color (hue) but also color depth (saturation) and color darkness (value) [34], as can be seen in Figure 4. As a result, the HSV color space is much better suited for addressing real-world environments containing light reflections, shadows and darkened regions. Therefore, the real-time image processing workflow involves the following steps: (i) transformation from the RGB image to an HSV image, (ii) transformation from the HSV image to a binary image by applying the selected HSV color range thresholds (one-step binarization), and (iii) posterior treatment of the binary image, including size and tracking filtering, to avoid the detection of unrelated objects. This post-processing filtering is optional (the HSV binarization already solves the obstacle detection) and simply discards the potential presence of unrelated elements in the binary image based on their size (number of pixels in the detected region) and position in relation to threshold values obtained empirically. Figure 4 shows an example including this filtering step.

Figure 4. Image processing used in the machine vision module: (a) Main workflow for image processing; (b) HSV color space used for obstacle detection (scales available in the OpenCV-3.1 library).
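A minimal OpenCV sketch of this workflow is given below. The HSV thresholds, the minimum blob size and the four calibration point pairs are illustrative assumptions, not the values used by the authors; the homography projection stands in for the calibration-based algorithm that translates pixel coordinates into 2D workspace coordinates.

import cv2
import numpy as np

# Assumed values: HSV thresholds for a yellow obstacle and four calibration
# points (pixel -> workspace cm) measured during the offline setup.
HSV_LOW, HSV_HIGH = (20, 100, 100), (35, 255, 255)
PIX_PTS = np.float32([[80, 60], [560, 60], [560, 420], [80, 420]])
WORLD_PTS = np.float32([[0, 0], [160, 0], [160, 110], [0, 110]])   # table in cm
H = cv2.getPerspectiveTransform(PIX_PTS, WORLD_PTS)

def detect_obstacle(frame_bgr, min_area=300):
    """Return (x, y, w, h) of the obstacle in workspace cm, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)        # (i) RGB -> HSV
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)              # (ii) one-step binarization
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    blobs = [c for c in contours if cv2.contourArea(c) > min_area]  # (iii) size filter
    if not blobs:
        return None
    x, y, w, h = cv2.boundingRect(max(blobs, key=cv2.contourArea))
    # Project the bounding-box corners from pixels to 2D workspace coordinates.
    corners = np.float32([[x, y], [x + w, y + h]]).reshape(-1, 1, 2)
    (x0, y0), (x1, y1) = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    return x0, y0, abs(x1 - x0), abs(y1 - y0)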

3.3. Communication

The communication thread is responsible for sending the latest extracted obstacle information from the machine vision module to other modules on request. As the different modules in the robotic system run simultaneously in parallel, the communication thread stores the latest information obtained from the image processing into a package and prepares it for sending upon any request received through TCP/IP sockets [25]. The information package is shown in Figure 5, where it is defined by several bytes containing the 2D position coordinates of the detected obstacle (x, y) and the approximate dimensions of the bounding box containing it. The units are in cm.

Figure 5. Schematic representation of the information package sent out from the machine vision module. The heading is used to avoid communication errors. Each 16-bit element is sent as two 8-bit units: Most Significant Bits (MSBs) plus Less Significant Bits (LSBs).
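The packing of the 80-bit package and the request/reply exchange can be sketched as follows. The heading value, the port number and the single-byte request convention are assumptions made for illustration; only the layout (one 16-bit heading plus four 16-bit values, each sent MSB first) follows Figure 5.

import socket
import struct

HEADING = 0xABCD          # assumed 16-bit heading value used to detect framing errors

def build_package(x_cm, y_cm, w_cm, h_cm):
    """Pack the 80-bit package of Figure 5: heading + (x, y, w, h) in cm,
    each as a 16-bit value transmitted MSB first (big-endian)."""
    return struct.pack(">5H", HEADING, int(x_cm), int(y_cm), int(w_cm), int(h_cm))

def serve(get_latest, host="0.0.0.0", port=5005):
    """Minimal TCP/IP server: reply to each client request with the latest package."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(16):                 # any request byte triggers a reply
                conn.sendall(build_package(*get_latest()))

# Client side (the path planning module): request and unpack the package.
def request_obstacle(host="127.0.0.1", port=5005):
    with socket.create_connection((host, port)) as cli:
        cli.sendall(b"?")
        heading, x, y, w, h = struct.unpack(">5H", cli.recv(10))
        return (x, y, w, h) if heading == HEADING else None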
Finally, an important consideration here is how to achieve sensor fusion, given that two cameras are used, with slight differences in the extracted information due to sensing accuracy. The strategy shown in Figure 6 addresses sensor fusion at the output level: information from Cam-1 is always used if this camera detects the moving obstacle. When Cam-1 cannot detect an obstacle (possibly due to robot intrusion), then the information from Cam-2 is used instead. While simple, this strategy has proven effective for real-time applications.

Figure 6. Flowchart for sensor fusion in real time [22]. The complementary camera (Cam-2) takes control when the main camera (Cam-1) is not able to detect the obstacle.
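In code, the output-level fusion of Figure 6 reduces to a simple selection; the sketch below assumes each camera pipeline returns its latest detection (or None).

def fuse(cam1_obstacle, cam2_obstacle):
    """Output-level fusion: prefer Cam-1; fall back to Cam-2 when Cam-1
    reports no detection (e.g., due to robot intrusion)."""
    if cam1_obstacle is not None:
        return cam1_obstacle
    return cam2_obstacle      # may also be None if neither camera sees the obstacle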
4. Pick-and-Place Path Planning Module

Pick-and-place tasks are among the most common industrial operations in manufacturing, where a component/product is moved between predefined start (pick) and goal (place) locations within the workspace of the robot. This automation task is traditionally programmed offline, computing predefined paths which link start and goal points. However, this only works well in structured and static conditions.

By providing the robotic system with sensory attributes such as machine vision, the system is now able to interact better with its environment, adapting to dynamic and changing conditions such as a moving obstacle in the robot workspace. This interaction is enabled via a path planning algorithm that is able to interpret the sensing information and (re-)plan a globally optimal, collision-free path for pick-and-place operations in real time. Hence, when a moving obstacle invalidates an initially planned path, this path can be updated quickly and effectively.

The pick-and-place path planning approach implemented here uses the method of dynamic roadmaps, which is a sampling-based real-time variation of the Probabilistic Road Maps (PRMs) method, proven effective in motion planning within changing environments [35]. The dynamic roadmaps method is characterized by an offline pre-processing phase and an online planning and computation phase.

4.1. Pre-Processing Phase

In this phase, the algorithm creates a mapping between the states sampled in the configuration space (C-space for short) and the cells in a discretized workspace, and the sampled states are connected with their neighboring states as characterized by PRMs. This phase is carried out as follows.

Firstly, the robot C-space is sampled. This involves randomly sampling the entire C-space to obtain the nodes of the roadmap, and this is performed assuming a completely obstacle-free space. Then, pairs of neighboring nodes are connected to form the edges of the roadmap. Neighboring nodes are defined as all those that lie within a predefined radius (r) from a given node (Figure 7). A single node within this roadmap represents a single robot configuration. Thus, a connecting edge between two nodes corresponds to a valid motion path between two configurations.

Figure 7. Illustration of the nodes that are considered neighbors of the node being considered: the current node (red color), the nodes lying within a specified radius r (black color) and all other nodes in the C-space (gray color).

The geometric representation of the workspace is then discretized into uniform cells, where the spatial resolution available is dependent on the cell size, with a subsequent trade-off between finer resolution and faster computation. Increasing the number of cells increases the computation time of the mapping stage (described below) exponentially.

Given the sampled C-space and discretized Cartesian space, a mapping between the two domains is performed. This mapping is obtained by iteratively checking every robot configuration associated with all the sampled nodes and along each edge of the roadmap. All workspace cells that collide with the robot at these configurations are mapped to the associated nodes and edges. Hence, during online execution, the roadmap can be updated based on the cells which are occupied by obstacles, producing a graph representation of valid motions between robot configurations across the entire workspace.

4.2. Online Phase

During the online phase, the algorithm retrieves the perceived obstacle information from the machine vision module by TCP/IP sockets [25]. The path planning module requests and immediately receives the 80-bit package shown in Figure 5 with the latest information about the position of the moving obstacle. From this package, the algorithm knows the obstacle (x, y) coordinates within the workspace and the size of a rectangular bounding box containing it. Therefore, the obstacle is treated as a box object, providing enhanced clearance between the robot and the obstacle for collision avoidance. This information is combined with the mapping computed offline to create a graph representation of the collision-free regions in the C-space.

The desired start and goal configurations (which can change at any time) are connected to the nearest nodes in the roadmap. Then, two steps are used to build the new path for the robot. First, the A* algorithm [36] (an extension of Dijkstra's algorithm for graph search) is implemented to search for the shortest route within the graph, finding a path that guarantees no collision with the moving obstacle. Then, B-spline smoothing [37] is used to smoothen the obtained path and achieve a continuous smooth motion.

Once the robot computes the initially planned path, the path planner continues to monitor the obstacle in the workspace. If any detected change in the environment invalidates a previously planned path, then a new updated path is computed using the steps described above. In this implementation, the algorithm is assessed for real-time performance based on its ability to plan paths faster than the human reaction time, which is approximately 180 ms [20]. Human reaction time is taken as the reference because robots must react to changes in the environment quicker than that for a safe interaction with human workers. A high-level flowchart representing this real-time path planner is given in Figure 8.

Figure 8. Flowchart for dynamic path planning [22]. The dashed box indicates re-planning capability.
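The dynamic-roadmap idea of Sections 4.1 and 4.2 can be illustrated with the simplified sketch below. It uses a 2D configuration space and a trivial node-to-cell mapping purely for brevity; the actual system samples the 6-DOF joint space, maps cells through collision checks of the robot geometry, and smooths the returned path with B-splines [37].

import heapq
import itertools
import math
import random

# --- Offline pre-processing (illustrative 2D C-space; the real robot has 6 DOF) ---
def build_roadmap(n_nodes=300, radius=0.15, seed=0):
    """Sample an obstacle-free C-space and connect nodes within radius r."""
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(n_nodes)]
    edges = {i: [] for i in range(n_nodes)}
    for i, j in itertools.combinations(range(n_nodes), 2):
        if math.dist(nodes[i], nodes[j]) <= radius:
            edges[i].append(j)
            edges[j].append(i)
    return nodes, edges

def map_cells(nodes, cell_size=0.1):
    """Map workspace cells to the nodes they invalidate (placeholder: here a
    node simply occupies the cell it falls in; the real mapping uses collision checks)."""
    cell_of = lambda q: (int(q[0] / cell_size), int(q[1] / cell_size))
    mapping = {}
    for i, q in enumerate(nodes):
        mapping.setdefault(cell_of(q), set()).add(i)
    return mapping, cell_of

# --- Online phase: invalidate occupied cells, then run A* on the remaining graph ---
def plan(nodes, edges, mapping, occupied_cells, start, goal):
    blocked = set().union(*(mapping.get(c, set()) for c in occupied_cells)) \
              if occupied_cells else set()
    nearest = lambda q: min((i for i in range(len(nodes)) if i not in blocked),
                            key=lambda i: math.dist(nodes[i], q))
    s, g = nearest(start), nearest(goal)
    h = lambda i: math.dist(nodes[i], nodes[g])          # admissible heuristic
    open_set, came, cost = [(h(s), s)], {}, {s: 0.0}
    while open_set:
        _, u = heapq.heappop(open_set)
        if u == g:                                       # reconstruct the route
            path = [u]
            while u in came:
                u = came[u]
                path.append(u)
            return [start] + [nodes[i] for i in reversed(path)] + [goal]
        for v in edges[u]:
            if v in blocked:
                continue
            new_cost = cost[u] + math.dist(nodes[u], nodes[v])
            if new_cost < cost.get(v, float("inf")):
                cost[v], came[v] = new_cost, u
                heapq.heappush(open_set, (new_cost + h(v), v))
    return None   # no collision-free path found; the caller keeps monitoring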

5. Robot Control Module

So far, a machine vision module has been introduced for providing the robotic system with sensory capabilities, while a path planner module for decision making in pick-and-place tasks has been described. However, a third module for robot control is necessary to interface the planning results with real-time position tracking and actuator control.

Robots have been quite successful in accomplishing tasks in well-known environments like a work cell within a factory. The much harder problem of a robot acting in unstructured and dynamic
environments, like those in which humans normally act and live, is still an open research area [38]. In such
situations, robots need to adapt their tasks after beginning an initial sequence. In this work, a novel
toolbox, the Interfacing Toolbox for Robotic Arms (ITRA) [39], was used.

5.1. ITRA Toolbox and RSI Interface


ITRA is a cross-platform software toolbox, designed to facilitate the integration of robotic
arms with sensors, actuators and software modules through the use of an external server computer.
It contains fundamental functionalities for robust connectivity, real-time control and auxiliary functions
to set or get key functional variables. ITRA is a C++ based DLL of functions. Due to platform
availability during its development, it is currently focused around KUKA hardware, but can be
extended to handle real-time interfaces on ABB [40] and Stäubli [41] robots. As such, it runs on a
remote computer connected with KRC4 robots through a User Datagram Protocol (UDP/IP) socket.
All the embedded functions can be used through high-level programming language platforms
(e.g., MATLAB, Simulink and LabVIEW) or implemented into low-level language (e.g., C, C# and C++)
applications, providing the opportunity to speed up flexible and robust integration of robotic systems.
The ITRA is currently compatible with all KUKA KRC4 robots equipped with a KUKA software add-on
known as Robot Sensor Interface (RSI) [42], which was purposely developed by KUKA to enable
the communication between the robot controller and an external system (e.g., a sensor system or a
server computer).
Cyclical data transmission from the robot controller to the external system (and vice-versa) takes
place in parallel to the execution of the KUKA Robot Language (KRL) program. Using RSI makes it
possible to influence the robot motion or the execution of the KRL program by processing external
data. The robot controller communicates with the external system via the Ethernet UDP/IP protocol.
The ITRA takes advantage of the fundamental functionalities of RSI and allows achieving external
control of robotic arms through three different approaches.

5.2. Real-Time Robot Motion Control


Real-time robot motion control can be divided into two sub-problems: (i) the specification of the
control points of the geometric path (path planning), and (ii) the specification of the time evolution
along this geometric path (trajectory planning). Whereas the path-planning sub-problem is always
dealt with by the computer hosting the ITRA, where processing of machine vision data and/or other
sensor data can take place to compute the robot target position, the trajectory planning sub-problem
can be managed by different actors of the system.
In the first approach, referred to as the KRL-based approach, the trajectory planning takes place at the
KRL module level within the robot controller. The second approach has trajectory planning performed
within the external computer, soon after path-planning, and is referred to as the Computer-based approach.
The third approach relies on a real-time trajectory planning algorithm implemented into the RSI
configuration. Therefore, trajectory planning is managed by the RSI context and the approach is named
the RSI-based approach.
The KRL-based and the Computer-based approaches enable basic external robot control
capabilities, where the robot has to wait until the current target position is reached before moving to
the next one. This means that, if a new target point C is specified while the robot is moving from a point
A to a point B, the robot cannot adapt to this change until B is reached, which is a major limitation.
Unlike the KRL-based and the computer-based approaches, the RSI-based approach enables true
real-time path control of KUKA robots based on KRC4 controllers. This approach, which is used
for this work, permits fast online modifications to a planned trajectory, allowing robots to adapt to
changes in the dynamic environment and react to unforeseen obstacles. Whereas the path-planning
takes place in the server computer, trajectory planning has been implemented as an RSI configuration,
employing the second-order trajectory generation algorithm presented in [43]. The approach can
operate in Cartesian-space and in joint-space. While the robot is static or is travelling to a given
position, the computer can send a new target position (together with the maximum preferred speed
and acceleration) through a specific ITRA function. The target coordinates, received by the robot
controller, are used to compute the optimal coordinates of the set point to send to the robot arm drives
through a two-fold algorithm.
On the one hand, the set point is generated to guarantee a smooth transition from the initial
conditions (starting coordinates, velocity and acceleration) towards the final target position. On the
other hand, the algorithm makes sure the evolution of the robot motion is constrained within the given
maximum velocity and acceleration. Thanks to this approach, the robot motion can be quickly updated in response to the path planning module (e.g., the robot can adapt to any changes in the workspace interfering with its operation). This implementation is herein referred to as the robot control module and is schematically represented in Figure 9.

Figure 9. Robot Sensor Interface (RSI) for real-time computing in a 4 ms interpolation cycle. While moving from point A to point B, the robot can update its target position from the current B to a new point C in real time.
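The behaviour of the RSI-based set-point generation can be illustrated with the one-dimensional stepper below. It is only a stand-in under the stated velocity and acceleration limits, not the second-order algorithm of [43]; the point is that a target received in any 4 ms cycle is blended smoothly from the current motion state.

def step_setpoint(pos, vel, target, v_max, a_max, dt=0.004):
    """One 4 ms cycle of a simple acceleration- and velocity-limited set-point
    generator (1-DOF shown; applied per axis or Cartesian coordinate).
    Illustrative stand-in, not the algorithm of [43]."""
    dist = target - pos
    direction = 1.0 if dist >= 0 else -1.0
    # Largest speed that still allows stopping at the target with deceleration a_max.
    v_brake = (2.0 * a_max * abs(dist)) ** 0.5
    v_des = direction * min(v_max, v_brake)
    # Limit the change of velocity to the allowed acceleration over one cycle.
    dv = max(-a_max * dt, min(a_max * dt, v_des - vel))
    vel += dv
    pos += vel * dt
    return pos, vel

# The target can change at any cycle: simply call step_setpoint with the new
# target (point C) and the current state, and the motion blends smoothly.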

6. Experimental Setup

In this section, the experimental setup is described, defining all the components of the system including the robotic manipulator, workspace, optical cameras and moving obstacle, among others. The experiments and related discussion in this section are specific to one case study. Nevertheless, one of the main advantages of the proposed system is its modularity, and it can be generally applied to different applications with no constraints on the type of robot manipulator or sensory devices used or the environment in which they are deployed.

6.1. Robotic Manipulator and Pick-and-Place Elements

The KR90 R3100 produced by KUKA was used in this case study [23]. This robot is a 6-axis serial manipulator, with the characteristics shown in Table 1 and Figure 10 [23]. This robot was chosen as it is commonly found in industrial applications. However, the proposed system can be implemented with any other serial manipulator (robots not manufactured by KUKA would require their accompanying controllers). Indeed, the KR06 R900 [24] was also employed during preliminary experiments.

Table 1. Robot characteristics for the KUKA KR90 R3100.

Characteristics      Values
Working envelope     66 m³
Weight               1121 kg
Axis 1 (speed)       ±185° (105°/s)
Axis 2 (speed)       −5° to −140° (101°/s)
Axis 3 (speed)       +155° to −120° (107°/s)
Axis 4 (speed)       ±350° (292°/s)
Axis 5 (speed)       ±125° (258°/s)
Axis 6 (speed)       ±350° (284°/s)

This robot was given the task of performing a series of pick-and-place actions guided by a collision-free motion plan. In order to perform the pick-and-place operations, a simple box and hook (Figure 11) were produced using laser-cutting technology, with the hook acting as a simple gripper. Start and goal box poses were defined, for which the planner was used to plan paths to transfer the box across various targets.

Figure 10. KUKA KR90 R3100 robot: (a) General aspect of the robotic arm manipulator; (b) Working envelope with the dimensions comprised during operation. Images obtained from [23].

Figure 11. Hook and box with handle for pick-and-place tasks. The box dimensions are 10 cm × 5 cm × 5 cm, and it was produced using laser-cutting technology.

6.2. Workspace and Moving Obstacles

A workspace for the KR90 R3100 robot can be defined according to its dimensions and disposition. In the experiments, the workspace was limited to a table with dimensions 160 cm × 110 cm × 85 cm (width-breadth-height) located next to the base of the robot (see Figure 12). The table was covered by an old, rough tablecloth, containing different textile traces and rusty/dirty patches. This,
along with strong reflections and inconsistent light, simulated a noisy environment. Finally, barriers
were placed along the table perimeter to constrain the moving obstacle inside the workspace.
The moving obstacle was mocked up using a remote-controlled car of dimensions 20 cm × 10 cm × 5 cm covered by yellow cardboard (Figure 12b), with a speed estimated at 1 m/s. This color was chosen due to its similar tonality to the orange color of the robot, which further increases the challenge for image processing. Nevertheless, this was effectively addressed by the approach to configuring the HSV parameters during calibration (Figure 3), which subsequently enables any color differences to be identified. Operator safety regulations prevented human entry into the workspace of the robot, hence the remote-controlled car was used to provide reasonable dynamics within the environment. A ‘spike’ of 30 cm was placed on top to virtually extend the height of the obstacle, which places greater demands on path correction. This experimental setup illustrates the robustness of the proposed system to environmental challenges that may be present in real-world scenarios.

Figure 12. Workspace table implemented for the robot: (a) General overview including robot base, covered table, pick-and-place box and moving obstacle; (b) Moving obstacle covered in yellow color with a 30 cm ‘spike’ on its top.
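Because the obstacle is detected in 2D from the overhead view while its height is fixed by the 30 cm spike, a detected pixel centroid can be turned into a workspace coordinate with a simple scale-and-offset mapping. The sketch below is illustrative only: the pixel origin is an assumed value, and the scale uses the approximate 0.3 cm-per-pixel resolution reported later in Section 7.2.

```cpp
// Illustrative mapping from an overhead-camera pixel to a workspace coordinate.
// The cm-per-pixel scale and the pixel origin are assumed placeholder values;
// the obstacle height is treated as constant because the detection itself is 2D.
#include <iostream>

struct TablePoint { double x_cm, y_cm, z_cm; };

TablePoint pixelToTable(double px, double py) {
    const double cmPerPixel = 0.3;    // approximate resolution reported for this setup
    const double originX_px = 0.0;    // assumed pixel location of the table corner
    const double originY_px = 0.0;
    const double spikeHeight = 30.0;  // cm, fixed height given by the 'spike'
    return { (px - originX_px) * cmPerPixel,
             (py - originY_px) * cmPerPixel,
             spikeHeight };
}

int main() {
    TablePoint p = pixelToTable(320, 240);
    std::cout << p.x_cm << " cm, " << p.y_cm << " cm, z = " << p.z_cm << " cm\n";
    return 0;
}
```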

6.3. Cameras and Location


Two HD Pro AWCAMHD15 cameras (Advent) were chosen for Cam-1 and Cam-2 (Figure 13).
These are low-cost webcams able to capture images with an original resolution of 640 × 480 pixels and
a frame rate of 30 fps.
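To make the color-based detection applied to these camera frames concrete, the following OpenCV sketch converts a captured frame to HSV, thresholds it and returns the centroid of the largest matching blob. The camera index and the HSV bounds are placeholders for illustration; in the actual module the HSV parameters are set during the calibration step (Figure 3).

```cpp
// Minimal, illustrative obstacle detector: HSV thresholding + largest-blob centroid.
// The HSV bounds below are placeholders, not the calibrated values used in the system.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cam(0);                      // Cam-1 (device index assumed)
    if (!cam.isOpened()) return 1;

    cv::Mat frame, hsv, mask;
    while (cam.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Example yellow-ish range; real bounds come from the HSV calibration step.
        cv::inRange(hsv, cv::Scalar(20, 100, 100), cv::Scalar(35, 255, 255), mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (contours.empty()) continue;

        // Pick the largest blob and report its centroid in pixel coordinates.
        double bestArea = 0; int best = 0;
        for (size_t i = 0; i < contours.size(); ++i) {
            double a = cv::contourArea(contours[i]);
            if (a > bestArea) { bestArea = a; best = static_cast<int>(i); }
        }
        cv::Moments m = cv::moments(contours[best]);
        if (m.m00 > 0)
            std::cout << "obstacle at (" << m.m10 / m.m00 << ", "
                      << m.m01 / m.m00 << ") px\n";
    }
    return 0;
}
```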
Figure 13. Disposition of the cameras for the experiments: (a) Relative location of the cameras with relation to the robot workspace; (b) Close-up view of the cameras attached to the cell caging.

The main camera, Cam-1, was placed in an overhead location (Figure 13), 3 m over the ground, capturing images of the workspace table from a landscape perspective. The complementary camera, Cam-2, was placed at a much lower height of 1.5 m, orthogonal in direction to Cam-1, and therefore capturing the table from a portrait perspective. The selected disposition for Cam-2 was intended for the recovery of any blind points in Cam-1 should the robot intrude into Cam-1's field of view.

6.4. Host Computer and Related Software

An Inspiron 15 7000 (quad-core Intel i7) laptop (DELL, Round Rock, TX, USA) with 16 GB RAM (Windows 10 operating system) was used in conjunction with a KR C4 controller to implement the proposed system. The laptop possesses two USB ports, one for each camera, and an Ethernet port for connection to the controller.

Both the path planner, implemented in MATLAB (MathWorks, Natick, MA, USA), and the machine vision, implemented in C++ and called from MATLAB through executable files, were executed on the remote PC. Actuation signals to the controller were sent through a robot control DLL to interface with the KR C4. The integrated system was run from a user-friendly Graphical User Interface (GUI), also developed in MATLAB (Figure 14).

Figure 14. Main controls in the MATLAB GUI for the real-world physical demonstrator.

7. Experiments Evaluation and Discussion
In this section, the proposed system with the three described modules, (i) machine vision, (ii) pick-and-place path planning and (iii) robot control, is applied to both software simulations and to a real-world physical demonstrator. Then, a performance evaluation is presented.
The proposed system was assessed in two ways. Firstly, simulations were carried out through a simulation platform developed in MATLAB and interfaced via a custom GUI. The system was then evaluated on the real-world physical demonstrator. In both cases the system performance was assessed according to its behavior in responding to the presence of dynamic obstacles and the computation time required to (re-)plan scenarios. As the obstacle detection is implemented in 2D, a constant height for the moving obstacle is assumed (30 cm ‘spike’ on its top).

7.1. Simulation Tool Analysis

Prior to deployment to the real-world system, a study based on simulations was undertaken to validate the behaviors of the individual components and the overall performance of the proposed integrated system. To this end, a platform for simulations, developed in MATLAB, was integrated into a GUI for human-computer interactions (Figure 15). The simulated environment is built to exactly match the real-world setup of the physical system, where the robot performs pick-and-place path planning with motion constrained to a limited workspace around a table.

Figure 15. MATLAB GUI showing the simulation of the real-world environment for the KR90 R3100 robot performing pick-and-place tasks across the table.

The simulation provided a platform to validate the behavior of the planner in responding to dynamic obstacles by considering several pick-and-place scenarios. Several unknown moving obstacle trajectories were captured using the machine vision module and used to simulate the dynamic obstacle in these simulations. The visualisation capability of the GUI enabled tracking of the changes to planned
motion paths. An example is shown in Figure 16. These simulations were also used to provide an initial benchmark of the system according to its computational efficiency. Table 2 shows three different motion path problems for which the computational performance was measured; the corresponding computation times are reported in Table 3. These times are broken down according to the various functions used in path planning, along with the total time. Five trials for each planning problem were performed to provide statistical significance. As shown in Table 3, the computation time is different for each path, and even shows some oscillation within a given path. Overall, the computation times are below 50 ms for all trials, which meets the requirement to react faster than the average human reaction time (180 ms) [20] by several fold.

Figure 16. An instance of dynamic path planning simulation: (a) Lateral view; (b) Front view. The obstacle (red block) moves inwards towards the robot (path shown as a series of green crosses) as it begins executing an initially planned path (blue line). As the obstacle obstructs this plan, the planner continues to find a new feasible path, which results in the final executed trajectory shown as red circles (indicating the time evolution of the trajectory).
Table 2. Three planning problems used to assess the computational efficiency of the planner.

ID    Length (mm)    Start x (mm)    Start y (mm)    Start z (mm)    End x (mm)    End y (mm)    End z (mm)
1     1643.5         10500           -6400           975             10500         -5200         975
2     1994.9         11000           -6400           975             10300         -5200         975
3     1981.9         10200           -5700           975             11200         -5700         975
Table 3. Computation time (ms) for path planning in simulations.

ID    Collision Checking    A* Path Planning    Path Smoothing    Convert Cartesian    Total
1     30.3 ± 1.6            1.64 ± 0.4          3.06 ± 0.5        3.12 ± 0.2           38.1 ± 2.5
2     17.8 ± 1.3            8.00 ± 0.3          3.62 ± 0.4        3.72 ± 0.4           33.1 ± 2.3
3     38.5 ± 3.3            1.36 ± 0.6          2.42 ± 0.2        2.7 ± 0.5            45.0 ± 4.0
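Table 3 separates the planning cost into collision checking, A* graph search, path smoothing and conversion to Cartesian coordinates. As a point of reference for the search stage only, the sketch below runs a minimal A* over a toy occupancy grid; it is not the authors' dynamic-roadmap planner, and the grid, unit step costs and Euclidean heuristic are assumptions made for illustration.

```cpp
// Minimal A* over a small 2D occupancy grid (illustration of the search stage only).
#include <cmath>
#include <iostream>
#include <queue>
#include <vector>

struct Node { int x, y; double f; };
struct Cmp { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

// Returns the cost of the shortest collision-free path, or -1 if none exists.
double astar(const std::vector<std::vector<int>>& grid, int sx, int sy, int gx, int gy) {
    const int H = grid.size(), W = grid[0].size();
    auto h = [&](int x, int y) { return std::hypot(gx - x, gy - y); };  // admissible heuristic
    std::vector<std::vector<double>> g(H, std::vector<double>(W, 1e18));
    std::priority_queue<Node, std::vector<Node>, Cmp> open;
    g[sy][sx] = 0.0;
    open.push({sx, sy, h(sx, sy)});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return g[n.y][n.x];
        for (int k = 0; k < 4; ++k) {
            int nx = n.x + dx[k], ny = n.y + dy[k];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx] == 1) continue;  // occupied
            double cand = g[n.y][n.x] + 1.0;
            if (cand < g[ny][nx]) {
                g[ny][nx] = cand;
                open.push({nx, ny, cand + h(nx, ny)});
            }
        }
    }
    return -1.0;
}

int main() {
    std::vector<std::vector<int>> grid = {   // 0 = free cell, 1 = obstacle cell
        {0, 0, 0, 0},
        {0, 1, 1, 0},
        {0, 0, 0, 0},
    };
    std::cout << "path cost: " << astar(grid, 0, 0, 3, 2) << "\n";
    return 0;
}
```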
7.2. Physical Demonstrator Performance

Trials on the physical robot showed that the integrated system provided correct behavior in response to dynamic obstacles. In all cases the robotic arm successfully performed the pick-and-place
operation without collision. Two videos are provided as Supplementary files S1 and S2. Where possible,
the system plans a new path to achieve the task without colliding with the obstacle. Where this is
not possible (for example, when the obstacle would collide with the goal configuration of the robot),
the robot waits until the obstacle is cleared away. Figure 17 provides a real video frame capturing
the configurations of the objects in the environment acquired during experimentation, as well as the
acquired frames from the machine vision cameras for the same time instance.
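The behavior described above amounts to a simple decision rule applied at every working cycle; the sketch below captures it under assumed, stubbed planner and vision interfaces (the function names are hypothetical, not the system's API).

```cpp
// Sketch of the re-plan-or-wait policy described above. The planner and vision
// interfaces (planNewPath, goalBlockedBy) are hypothetical stand-ins stubbed with
// simple geometric checks, not the actual API of the system.
#include <cmath>
#include <iostream>
#include <optional>
#include <vector>

struct Pose { double x, y, z; };
using Path = std::vector<Pose>;

// Stub: the goal is considered blocked if the obstacle sits within 0.2 m of it.
bool goalBlockedBy(const Pose& goal, const Pose& obstacle) {
    return std::hypot(goal.x - obstacle.x, goal.y - obstacle.y) < 0.2;
}

// Stub: pretend a detour always exists when the goal itself is free.
std::optional<Path> planNewPath(const Pose& start, const Pose& goal, const Pose&) {
    return Path{start, goal};
}

// Decide how to react when the current plan is obstructed by the detected obstacle.
void reactToObstacle(const Pose& start, const Pose& goal, const Pose& obstacle) {
    if (goalBlockedBy(goal, obstacle)) {
        std::cout << "goal occupied: wait until the obstacle is cleared away\n";
        return;
    }
    if (auto path = planNewPath(start, goal, obstacle)) {
        std::cout << "feasible detour found: switch to the new path\n";
        return;
    }
    std::cout << "no feasible path at this instant: wait and re-plan\n";
}

int main() {
    reactToObstacle({0, 0, 0}, {1.0, 0.5, 0}, {1.05, 0.45, 0});  // obstacle on the goal
    reactToObstacle({0, 0, 0}, {1.0, 0.5, 0}, {0.5, 0.0, 0});    // obstacle elsewhere
    return 0;
}
```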

(a) (b)

Figure 17. Synchronized video frames captured during operation [22]: (a) A real video frame taken from the safety area showing all object configurations in the environment; (b) Acquired frames by Cam-1 (top) and Cam-2 (bottom) from machine vision.

Smooth transition at the switching of paths during re-planning was observed at all times. This was achieved as a result of the real-time trajectory generation implemented within the real-time control module. The successful avoidance of obstacles in all instances also validates the effectiveness of the machine vision module where, for this environmental setup, its resolution was measured to be approximately 0.3 cm per pixel. This gives an estimated error of ±3 cm in perceiving the obstacle pose, which proved sufficient for the experiments conducted. Larger image resolutions can reduce the localization error at the cost of increasing computation complexity, and this trade-off is adjustable depending on the application aims. Additionally, the speed of the moving obstacle can also affect the localization error, although no issues were found with the current setup.
Indeed, the machine vision was able to track the moving obstacle continuously, regardless of any challenging and dynamic conditions such as reflections, shadows and motion blur. Moreover, the issues relating to the robot intruding the field of view of Cam-1 were effectively handled by the complementary Cam-2. Figure 18 shows matching frames from Cam-1 and Cam-2 during the demonstrations, showing a particular instance in which the robot intrudes the field of view in Cam-1 and completely covers the dynamic obstacle. As shown in the figure, Cam-2 was able to detect the obstacle when Cam-1 could not. Hence, the information package (Figure 6) corresponding to these frames was generated from the image of Cam-2.

Figure 18. Frames from a particular moment during demonstration in which the robotic arm blocks the vision of Cam-1 (top) but Cam-2 (bottom) is able to keep tracking the obstacle.

7.3. Computing Performance
The main evaluation in this work is based on the computation time of the system and how this affects the system response. In particular, two aspects of performance are evaluated: the working cycle and the system reaction time. The working cycle here refers to the time it takes for the system to update its output (refresh rate). As the different modules in the system run in parallel, the working cycle is simply determined by the module with the highest computation time. On the other hand, the reaction time can be stated as the time it takes for the system to start reacting after a given stimulus. Therefore, the reaction time requires strictly sequential execution through all stages, from initial camera acquisition to final robot movement. In this section, the computation time per working cycle of each module and of the whole integrated system is presented. Then, in the next section, an evaluation of the system reaction time is provided.
Approximated average computation times for the machine vision and path planning modules are provided in Figure 19. For machine vision, the main bottleneck was frame acquisition, which took approximately 45 ms to obtain a frame from the cameras. This is due to the frame rate limitations imposed by the hardware. However, the image processing took only 14 ms and the communication (information package preparation) was negligible. It is worth mentioning that the individual computation times of the stages of the machine vision module do not accumulate, as they are implemented in parallel threads, leading to a working cycle of 45 ms. On the other hand, the stages of the path planning module (Figure 19b) are executed in cascade, so the total time is the sum of the individual function costs. Note that the reported computation time for path planning corresponds to the total time required from receiving sensory information to outputting a new motion plan. Finally, in relation to the robot control module, the time required for sending command positional packets was negligible, as the update rate of the RSI-based approach is equal to the running frequency of the RSI context. Consequently, a new target position can be set every 4 ms if necessary.

Figure 19. Approximated average computation times for: (a) Machine vision module with acquisition, image processing and communication stages running simultaneously (times do not accumulate); (b) Path planning module with read package, check collision and re-planning stages running in cascade, where the total time is the sum of the three stages.

The total time required for a working cycle of the system is provided in Figure 20, including the time for the machine vision (45 ms), path planning (57 ms) and robot control (negligible) modules. As these modules run simultaneously, these times do not accumulate and the working cycle was estimated at 57 ms. The modularity of the proposed system allows a reduction in the working cycle with relation to other similar systems [20] (Figure 20b). In other words, the response of the system can be updated at a higher frequency. However, note that the working cycle frequency is not the same as the reaction time, which is evaluated in the next section.

Figure 20. Approximated average computation times involved in: (a) Working cycle of the proposed system including machine vision, path planning and robot control modules running simultaneously (times do not accumulate); (b) Working cycle comparison between the proposed system and a different system developed in [20], including the human reaction time as a reference.

7.4. Reaction Time for Human Interaction

Reaction time is the most important parameter in real-time control, since it measures the promptness of the system, and it can be defined as the latency between the stimulus and the very start of the reaction. Therefore, reaction time is crucial for robots that must possess real-time adaptive behaviors to respond to dynamic changes and/or to interact with humans. Determining the average reaction time for humans is not straightforward, and different values are suggested in the literature. Indeed, according to [44], the average reaction time for humans depends on the stimulus, being 250 ms for a visual stimulus, 170 ms for an auditory stimulus, and 150 ms for a haptic stimulus. Other references such as [20] state a general value of 180 ms for the average human reaction time. In this work, the sensing capabilities come from a machine vision system implemented by means of optical cameras, so it seems reasonable to adopt as reference the time related to a visual stimulus (250 ms). However, the more challenging reaction time of 180 ms [20] is also used as a reference.
Reaction time has to be measured from the stimulus to the start of the reaction. Therefore, unlike the previously evaluated working cycle, it includes the whole computational sequence consisting
of perception and related processing, motion planning and actuation, such that their individual computational costs accumulate. Additionally, the reaction time for robotic systems is not only dependent on the computation times from the different stages but also on the inherent mechanical reaction time of the physical robot (intrinsic latency). While this issue is usually not addressed in the literature [10,20,21], it has been taken into consideration in this work for a more complete evaluation. Indeed, the reaction time (robot latency) of the RSI-based external control used in this work was measured 100 times through commanding the robot to move to a target from a static position (with ITRA running within MATLAB and saving robot feedback positions through the saving thread). The timestamp of the first robot feedback positional packet, reporting a deviation greater than or equal to 0.01 mm from the original home position, was compared with the timestamp taken just before sending the target position to the robot. It was found that the resulting reaction time was 30 ms (±3 ms).
After analyzing the robot intrinsic latency, the reaction time of the proposed system was measured to be 146 ms, which consists of frame acquisition (45 ms) and image processing (14 ms), path (re-)planning (57 ms), and robot latency (30 ms). This is shown in Figure 21, where the system reaction time is clearly below both the reaction time to a visual stimulus (250 ms) and the reference for the general human reaction time (180 ms).
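The split between a parallel working cycle and a sequential reaction time can be checked directly from the reported figures; the small program below simply reproduces that arithmetic using the module times measured above.

```cpp
// Worked check of the reported timings: working cycle vs. reaction time.
#include <algorithm>
#include <iostream>

int main() {
    const double acquisition = 45.0, processing = 14.0;   // machine vision stages (ms)
    const double planning = 57.0, robotLatency = 30.0;    // planner total and robot latency (ms)

    // Modules run in parallel, so the refresh rate is set by the slowest module.
    double workingCycle = std::max({acquisition, planning, 0.0 /* robot control ~negligible */});

    // A reaction must pass through every stage once, so the costs accumulate.
    double reactionTime = acquisition + processing + planning + robotLatency;

    std::cout << "working cycle : " << workingCycle << " ms\n";   // 57 ms
    std::cout << "reaction time : " << reactionTime << " ms\n";   // 146 ms
    std::cout << "below 180 ms? : " << (reactionTime < 180.0) << "\n";
    return 0;
}
```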

Figure 21. Approximated average reaction times: (a) The proposed system including machine vision (acquisition, processing), path planning and robot latency in sequential implementation (times do accumulate); (b) A comparison between the proposed system, the general human reaction time [20] and the visual human reaction time [44].

8. Conclusions

Robotic systems are becoming more widely adopted by industries for their manufacturing processes. However, these are typically traditional systems consisting of robots following predefined tasks planned offline. A need for a strong technological shift has become apparent within industry to move towards more intelligent and autonomous systems, able to interact with their environment.
In this work, a flexible and autonomous intelligent system with environmental awareness, ready for human/robot interaction, is proposed and trialed on a physical demonstrator. It is based on the integration of three independent modules working in real time: (i) machine vision (smart sensing), (ii) path planning (reasoning, decision making), and (iii) robot control (movement coordination). Machine vision is based on two webcams placed off-board in fixed locations, whose frames are processed to detect obstacles by color in the HSV space. During path planning, this information is retrieved from the machine vision and used to (re-)plan a series of motion paths to enable pick-and-place operations while interacting with a dynamically-moving obstacle in the robot workspace. The planner is based on the dynamic roadmaps method, with the A* algorithm used for subsequent graph search and B-splines for path smoothing. Finally, a novel approach using RSI was developed for real-time robot control, where the target position of the robot can be updated every 4 ms whilst enabling real-time trajectory modifications.
With these modules running simultaneously and communicating to each other by TCP/IP sockets,
an effective, integrated system was achieved that is low-cost and modular. Both
simulations and physical experimentation based on the KUKA KR90 R3100 robot were used to evaluate
the system. From conducted trials, the system consistently executed all computations for a given
working cycle in under 60 ms, which is an improvement over similar cases in the literature. Similarly,
the system’s overall reaction time was experimentally determined to be on average 146 ms, which is
indeed below the average human reaction time (180 ms). For future applications, any of the modules
can be easily interchanged with other implementations for perception, planning and acting. A more
dedicated image processing could be implemented for the machine vision, for example, techniques
based on saliency detection and/or deep learning [45–47]. Additionally, obstacle detection can be
extended to 3D, while other sensing capabilities can be adopted (such as laser or ultrasound) and
applied to robotic tasks as required by the application whilst maintaining the modularity and advantage
of the designed system.

Supplementary Materials: The following are available online at https://doi.org/10.15129/4df22803-2cc4-4cce-9cea-509f88f1b504, Video S1: pick place1.mp4, Video S2: pick place2.mp4.
Author Contributions: Conceptualization, E.Y. and C.M.; methodology, J.Z., Z.F., C.W., Y.Y. and C.M.; software,
J.Z., Z.F., C.W., Y.Y. and C.M.; investigation, J.Z., Z.F., C.W., Y.Y. and C.M.; writing—original draft preparation,
J.Z., C.W. and C.M.; writing—review and editing, J.Z., Z.F., C.W., Y.Y., C.M., E.Y., T.R., J.M., Q.-C.P. and J.R.;
supervision, E.Y., T.R., J.M., Q.-C.P. and J.R.; project administration, E.Y.; funding acquisition, E.Y. and C.M.
Funding: This research was supported by the Advanced Forming Research Centre (AFRC) under its Route to
Impact Programme 2017–2018 funded by Innovate UK High Value Manufacturing Catapult.
Acknowledgments: We would like to thank Remi Christopher Zante, Wenjuan Wang (AFRC, University of
Strathclyde), Stephen Gareth Pierce, Gordon Dobie, and Charles MacLeod (Centre for Ultrasonic Engineering,
University of Strathclyde), and Francisco Suarez-Ruiz (Nanyang Technological University, Singapore) for their
contribution and support towards the completion of this project.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Li, R.; Gu, D.; Liu, Q.; Long, Z.; Hu, H. Semantic scene mapping with spatio-temporal deep neural network
for robotic applications. Cogn. Comput. 2018, 10, 260–271. [CrossRef]
2. Zhao, F.; Zeng, Y.; Wang, G.; Bai, J.; Xu, B. A brain-inspired decision making model based on top-down biasing
of prefrontal cortex to basal ganglia and its application in autonomous UAV explorations. Cogn. Comput.
2018, 10, 296–306. [CrossRef]
3. McGinn, C.; Cullinan, M.; Holland, D.; Kelly, K. Towards the design of a new humanoid robot for domestic
applications. In Proceedings of the 2014 IEEE International Conference on Technologies for Practical Robot
Applications (TePRA), Woburn, MA, USA, 14–15 April 2014.
4. Liu, Y.; Tian, Z.; Liu, Y.; Li, J.; Fu, F.; Bian, J. Cognitive modeling for robotic assembly/maintenance task in
space exploration. In Proceedings of the AHFE 2017 International Conference on Neuroergonomics and
Cognitive Engineering, Los Angeles, CA, USA, 17–21 July 2017.
5. Zhang, S.; Ahn, H.S.; Lim, J.Y.; Lee, M.H.; MacDonald, B.A. Design and implementation of a device
management system for healthcare assistive robots: Sensor manager system version 2. In Proceedings of the
9th International Conference on Social Robotics (ICSR), Tsukuba, Japan, 22–24 November 2017.
6. Mineo, C.; Pierce, S.G.; Nicholson, P.I.; Cooper, I. Robotic path planning for non-destructive testing—A
custom MATLAB toolbox approach. Robot. Comput. Integr. Manuf. 2016, 37, 1–12. [CrossRef]
7. Jasinski, M.; Maczak, J.; Szulim, P.; Radkowski, S. Autonomous agricultural robot—Testing of the vision
system for plants/weed classification. In Automation 2018; Szewczyk, R., Zielinski, C., Kaliczynska, M., Eds.;
Springer: Cham, Switzerland, 2018; Volume 743, pp. 473–483. ISBN 978-3-319-77179-3.
8. Finzgar, M.; Podrzaj, P. Machine-vision-based human-oriented mobile robots: A review. J. Mech. Eng. 2017,
63, 331–348. [CrossRef]

9. Podrzaj, P.; Hashimoto, H. Intelligent space as a framework for fire detection and evacuation. Fire Technol.
2008, 44, 65–76. [CrossRef]
10. Lopez-Juarez, I. Skill acquisition for industrial robots: From stand-alone to distributed learning.
In Proceedings of the 2016 IEEE International Conference on Automatica (ICA-ACCA), Curico, Chile,
19–21 October 2016.
11. Dixit, U.S.; Hazarika, M.; Davim, J.P. Emergence of production and industrial engineering. In A Brief History
of Mechanical Engineering; Dixit, U.S., Hazarika, M., Davim, J.P., Eds.; Springer: Cham, Switzerland, 2017;
pp. 127–146. ISBN 978-3-319-42916-8.
12. Anand, G.; Rahul, E.S.; Bhavani, R.R. A sensor framework for human-robot collaboration in industrial robot
work-cell. In Proceedings of the 2017 International Conference on Intelligent Computing, Instrumentation
and Control Technologies (ICICICT), Kannur, India, 6–7 July 2017.
13. Cherubini, A.; Passama, R.; Crosnier, A.; Lasnier, A.; Fraisse, P. Collaborative manufacturing with physical
human–robot interaction. Robot. Comput. Integr. Manuf. 2016, 40, 1–13. [CrossRef]
14. Wu, Y.; Chan, W.L.; Li, Y.; Tee, K.P.; Yan, R.; Limbu, D.K. Improving human-robot interactivity for
tele-operated industrial and service robot applications. In Proceedings of the 2015 IEEE 7th International
Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and
Mechatronics (RAM), Siem Reap, Cambodia, 15–17 July 2015.
15. Perez, L.; Rodriguez, I.; Rodriguez, N.; Usamentiaga, R.; Garcia, D.F. Robot guidance using machine vision
techniques in industrial environments: A comparative review. Sensors 2016, 16, 335. [CrossRef] [PubMed]
16. Miyata, C.; Chisholm, K.; Baba, J.; Ahmadi, M. A limb compliant sensing strategy for robot collision reaction.
IEEE Asme Trans. Mechatron. 2016, 21, 674–682. [CrossRef]
17. Lee, H.-W.; Wong, C.-Y. The study of the anti-collision system of intelligent wheeled robot. In Proceedings of
the 2017 International Conference on Applied System Innovation (ICASI), Sapporo, Japan, 13–17 May 2017.
18. Ponte, H.; Queenan, M.; Gong, C.; Mertz, C.; Travers, M.; Enner, F.; Hebert, M.; Choset, H. Visual sensing for
developing autonomous behavior in snake robots. In Proceedings of the 2014 IEEE International Conference
on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014.
19. Gasparetto, A.; Boscariol, P.; Lanzutti, A.; Vidoni, R. Path planning and trajectory planning algorithms:
A general overview. In Motion and Operation Planning of Robotic Systems; Carbone, G., Gomez-Bravo, F., Eds.;
Springer: Cham, Switzerland, 2015; Volume 29, pp. 3–27. ISBN 978-3-319-14705-5.
20. Kunz, T.; Reiser, U.; Stilman, M.; Verl, A. Real-time path planning for a robot arm in changing environments.
In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei,
Taiwan, 18–22 October 2010.
21. Wall, D.G.; Economou, J.; Goyder, H.; Knowles, K.; Silson, P.; Lawrance, M. Mobile robot arm trajectory
generation for operation in confined environments. J. Syst. Control Eng. 2015, 229, 215–234. [CrossRef]
22. Zabalza, J.; Fei, Z.; Wong, C.; Yan, Y.; Mineo, C.; Yang, E.; Rodden, T.; Mehnen, J.; Pham, Q.-C.; Ren, J. Making
industrial robots smarter with adaptive reasoning and autonomous thinking for real-time tasks in dynamic
environments: A case study. In Proceedings of the 9th International Conference on Brain Inspired Cognitive
Systems (BICS), Xi’an, China, 7–8 July 2018.
23. KR QUANTEC Extra HA Specifications. Available online: https://www.kuka.com/en-de/products/robot-
systems/industrial-robots/kr-quantec-extra (accessed on 16 October 2018).
24. KR AGILUS Specifications. Available online: https://www.kuka.com/en-de/products/robot-systems/
industrial-robots/kr-agilus (accessed on 16 October 2018).
25. Donahoo, M.J.; Calvert, K.L. TCP/IP Sockets in C Practical Guide for Programmers, 2nd ed.; Morgan Kaufmann
Publishers: San Francisco, CA, USA, 2009; ISBN 9780080923215.
26. Mutlu, M.; Melo, K.; Vespignani, M.; Bernardino, A.; Ijspeert, A.J. Where to place cameras on a snake robot:
Focus on camera trajectory and motion blur. In Proceedings of the 2015 IEEE International Symposium on
Safety, Security, and Rescue Robotics (SSRR), West Lafayette, IN, USA, 18–20 October 2015.
27. Astua, C.; Barber, R.; Crespo, J.; Jardon, A. Object detection techniques applied on mobile robot semantic
navigation. Sensors 2014, 14, 6734–6758. [CrossRef] [PubMed]
28. Ding, G.; Che, W.; Zhao, S.; Han, J.; Liu, Q. Real-time scalable visual tracking via quadrangle kernelized
correlation filters. IEEE Trans. Intell. Transp. Syst. 2018, 19, 140–150. [CrossRef]

29. Han, J.; Pauwels, E.J.; de Zeeuw, P.M.; de With, P.H.N. Employing a RGB-D sensor for real-time tracking of
humans across multiple re-entries in a smart environment. IEEE Trans. Consum. Electron. 2012, 58, 255–263.
[CrossRef]
30. Coates, A.; Ng, A.Y. Multi-camera object detection for robotics. In Proceedings of the 2010 IEEE International
Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010.
31. Han, J.; Pauwels, E.J.; de Zeeuw, P. Visible and infrared image registration in man-made environments
employing hybrid visual effects. Pattern Recognit. Lett. 2013, 34, 42–51. [CrossRef]
32. Ilonen, J.; Kyrki, V. Robust robot-camera calibration. In Proceedings of the 15th International Conference on
Advanced Robotics, Tallinn, Estonia, 20–23 June 2011.
33. Shu, F. High-Precision Calibration Approaches to Robot Vision Systems. Ph.D. Thesis, University of
Hamburg, Hamburg, Germany, 2009.
34. Abu, P.A.; Fernandez, P. Performance comparison of the Teknomo-Fernandez algorithm on the RGB and
HSV color spaces. In Proceedings of the 2014 International Conference on Humanoid, Nanotechnology,
Information Technology, Communication and Control, Environment and Management (HNICEM),
Palawan, Philippines, 12–16 November 2014.
35. Leven, P.; Hutchinson, S. A framework for real-time path planning in changing environments. Int. J.
Robot. Res. 2002, 21, 999–1030. [CrossRef]
36. Cui, S.G.; Wang, H.; Yang, L. A simulation study of A-star algorithm for robot path planning. In Proceedings of
the 16th International Conference on Mechatronics Technology (ICMT), Tianjin, China, 16–19 October 2012.
37. Boor, C. A Practical Guide to Splines, 1st ed.; Springer: New York, NY, USA, 1978; ISBN 978-0-387-95366-3.
38. Kunz, T. Real-Time Motion Planning for a Robot Arm in Dynamic Environments. Master’s Thesis, University
of Stuttgart, Stuttgart, Germany, 2009.
39. Mineo, C.; Vasilev, M.; MacLeod, C.N.; Su, R.; Pierce, S.G. Enabling robotic adaptive behaviour capabilities
for new industry 4.0 automated quality inspection paradigms. In Proceedings of the 57th Annual British
Conference on Non-Destructive Testing, East Midlands, UK, 10–12 September 2018.
40. ABB, Application Manual—Robot Reference Interface. Available online: https://us.v-cdn.net/5020483/
uploads/editor/aw/bkkb1ykmrxsj.pdf (accessed on 14 November 2018).
41. Stäubli, uniVAL Drive. Available online: https://www.staubli.com/en/robotics/product-range/robot-
software/val3-robot-programming/unival-solutions/unival-plc/ (accessed on 14 November 2018).
42. KUKA. RobotSensorInterface 3.2 Documentation—Version: KST RSI 3.2 V1; KUKA: Augsburg, Germany, 2013.
43. Haschke, R.; Weitnauer, E.; Ritter, H. On-line planning of time-optimal, jerk-limited trajectories.
In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems,
Nice, France, 22–26 September 2008.
44. Grice, G.R.; Nullmeyer, R.; Spiker, V.A. Human reaction time: Toward a general theory. J. Exp. Psychol. General
1982, 111, 135–153. [CrossRef]
45. Liu, Y.; Han, J.; Zhang, Q.; Wang, L. Salient object detection via two-stage graphs. IEEE Trans. Circuits Syst.
Video Technol. 2018. [CrossRef]
46. Zhang, D.; Han, J.; Han, J.; Shao, L. Cosaliency detection based on intrasaliency prior transfer and deep
intersaliency mining. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1163–1176. [CrossRef] [PubMed]
47. Luan, S.; Chen, C.; Zhang, B.; Han, J.; Liu, J. Gabor convolutional networks. IEEE Trans. Image Process. 2018,
27, 4357–4366. [CrossRef] [PubMed]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
