Review
ARTICLE INFO

Keywords: Human–robot collaboration, Collaborative robotics, Cobot, Machine learning, Human–robot interaction

ABSTRACT

Technological progress increasingly envisions the use of robots interacting with people in everyday life. Human–robot collaboration (HRC) is the approach that explores the interaction between a human and a robot, during the completion of a common objective, at the cognitive and physical level. In HRC works, a cognitive model is typically built, which collects inputs from the environment and from the user, elaborates and translates these into information that can be used by the robot itself. Machine learning is a recent approach to building the cognitive model and behavioural block, with high potential in HRC. Consequently, this paper proposes a thorough literature review of the use of machine learning techniques in the context of human–robot collaboration. 45 key papers were selected and analysed, and a clustering of works based on the type of collaborative tasks, evaluation metrics and cognitive variables modelled is proposed. Then, a deep analysis of different families of machine learning algorithms and their properties, along with the sensing modalities used, is carried out. Among the observations, the importance of machine learning algorithms that incorporate time dependencies is outlined. The salient features of these works are then cross-analysed to show trends in HRC and to give guidelines for future works, comparing them with other aspects of HRC that did not appear in the review.
1. Introduction
sector mainly depicts the use of robotic manipulators that share the
Robots are progressively occupying a relevant place in everyday same workspace and jointly work at the realization of a piece of manu-
life [1]. Technological progress increasingly envisions the use of facturing. Such application of robotics is called cobotics and robots that
robots interacting with people in numerous application domains such can safely serve this purpose gain the denomination of collaborative
as flex- ible manufacturing scenarios, assistive robotics, social robots or CoBots (from this point on, they will be referred to as ‘‘cobots’’
companions. Therefore, human–robot interaction studies are crucial in this work) [1].
for the proper deployment of robots at a closer contact with humans. The objective of cobots is the increase of the performance of the
Among the possible declinations of this sector of robotics research, manufacturing process, both in terms of productivity yield and of
human–robot collaboration (HRC) is the branch that explores the relieving the user from the burden of the task, either physically or
interaction between a human and a robot acting like a team in cognitively [3]. Besides, there are also other sectors that could benefit
reaching a common goal [2]. The majority of works of this kind focus from the use of collaborative robots. Indeed, HRC has started to be
on such a team collaboratively working on an actual physical task, at considered in surgical scenarios [4]. In these, robotic manipulators
the cognitive and physical level. In these HRC works, a cognitive are foreseen helping surgeons, by passing tools [5] or by assisting
model (or cognitive architecture) is typically built, which collects them in the execution of an actual surgical procedure [6].
inputs from the environment and from the user, elaborates and Such an interest for this sector of robotics research has led to
translates them into information that can be used by the robot itself. the publications of hundreds of works on HRC [7]. Consequently,
The robot will use this model to tune its own behaviour, to increase recent attempts to regularize and standardize the knowledge about
the joint performance with the user. this topic, through reviews and surveys, have already been pursued.
Studies in HRC specifically explore the feasibility of use of robotic As mentioned, in Matheson et al. [3], a pool of works about the
platforms at a very close proximity with the user for joint tasks. This
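As an illustration of the sense–elaborate–act pattern shared by the cognitive models described above, a minimal Python sketch is given below. The class names, sensor channels and the trivial threshold rule used for inference are hypothetical stand-ins for the learned components surveyed in this review, not the architecture of any cited work.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    # Hypothetical inputs a cognitive model might collect from the scene and the user
    hand_position: tuple       # e.g. from a vision system
    muscle_activation: float   # e.g. from sEMG sensors


class CognitiveModel:
    """Minimal sense-elaborate-act loop: raw inputs are translated into
    task-level information that the robot uses to tune its behaviour."""

    def elaborate(self, obs: Observation) -> dict:
        # Stand-in for a learned model (HMM, LSTM, RL policy, ...):
        # here a trivial threshold rule infers whether the user needs help.
        needs_help = obs.muscle_activation > 0.7
        return {"user_needs_help": needs_help, "target": obs.hand_position}

    def select_action(self, info: dict) -> str:
        # The robot tunes its behaviour according to the inferred information
        return "assist_at_target" if info["user_needs_help"] else "hold_position"


model = CognitiveModel()
obs = Observation(hand_position=(0.4, 0.1, 0.3), muscle_activation=0.82)
print(model.select_action(model.elaborate(obs)))   # -> assist_at_target
```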
∗ Corresponding author.
E-mail addresses: francesco.semeraro@manchester.ac.uk (F. Semeraro), alexander.griffiths@baesystems.com (A. Griffiths), angelo.cangelosi@manchester.ac.uk (A. Cangelosi).
Fig. 2. Time trends of the collaborative tasks explored in HRC (15 collaborative assemblies, 11 object handovers, 17 object handling and 3 collaborative manufacturing works).
In the majority of these works, the robot recognizes the step of the sequence the user is currently working at and performs an action to assist the user in the same sub-task. For example, in Cunha et al. [20], the user is assembling a system of pipes; the robot has to understand which step the user is currently at and grab the pipe related to that step of the assembly. Munzer et al. [32] have the robot help the user assemble a box, through the understanding of what the user is doing. In other cases, instead, the robot predicts the next step of the assembly process and prepares the scenario for the user to perform it later. In Grigore et al. [24], the robot predicts the trajectory the human will perform while seeking assistance from the robot, and executes an action accordingly. In other cases, the robot can also perform an action on the object of interest to conclude the process started by the user. In Vinanzi et al. [47], the user is building a shape by means of blocks and the robot, thanks to the information it gathers, finishes the construction with the remaining blocks.

Another categorization in which the cognitive aspect of the interaction still carries most of the focus is object handover. Here, works focus on a specific part of the interaction, without considering a more complex task in the experimental design. In these experiments, the user and the robot exchange an item. Most of the times, the robot hands it over to the user. It is important for the robot, in this case, to understand the user’s intentions and expectations in the reception of the object. In Shukla et al. [42], the robot needs to understand the object desired by the user among the available ones and hand it over. In different terms, Choi et al. [18] design the robot to adjust to a change in the intentions of the user while receiving the object. Conversely, there are cases in which the user hands the object over to the robot [25,31].
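Many of the assembly works above frame step recognition as probabilistic filtering over a known task sequence. The sketch below shows that idea under stated assumptions: a discrete HMM forward filter over hypothetical, pre-clustered visual cues, with transition and emission values made up for illustration rather than taken from any of the cited systems.

```python
import numpy as np

# Illustrative sketch: recognizing which step of a known assembly sequence
# the user is in, using a discrete HMM forward filter.
n_steps = 4          # hidden states: assembly steps 0..3
n_obs = 5            # observation symbols, e.g. clustered hand/object features

# Mostly left-to-right transition model: stay in a step or advance to the next.
A = np.full((n_steps, n_steps), 1e-3)
for s in range(n_steps):
    A[s, s] += 0.7
    A[s, min(s + 1, n_steps - 1)] += 0.3
A /= A.sum(axis=1, keepdims=True)

# Emission model B[s, o]: probability of seeing cue o during step s
# (assumed known, e.g. estimated from labelled demonstrations).
rng = np.random.default_rng(0)
B = rng.dirichlet(np.ones(n_obs), size=n_steps)

belief = np.full(n_steps, 1.0 / n_steps)   # uniform prior over steps

def update(belief, obs):
    """One forward-filter step: predict with A, correct with B, normalize."""
    predicted = belief @ A
    posterior = predicted * B[:, obs]
    return posterior / posterior.sum()

for obs in [0, 0, 2, 2, 4]:                # a toy observation stream
    belief = update(belief, obs)
    print("estimated step:", int(belief.argmax()), belief.round(2))
```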
Table 1
Features extracted from the final set of papers. The presence of a ‘‘+’’ in an entry means that the elements are part of a unique framework, while a ‘‘,’’ denotes multiple ones. For the ‘‘ML technique’’ column, the legend is: ANN = Artificial Neural Network, CNN = Convolutional Neural Network, Fuzzy = Fuzzy systems, DQN = Deep Q Network, GMM = Gaussian Mixture Model, HMM = Hidden Markov Model, MLP = Multi-Layer Perceptron, DNS = Dynamic Neural System, RL = Reinforcement Learning, RNN = Recurrent Neural Network, LSTM = Long Short Term Memory, VAE = Variational Autoencoder. Besides, for the same column, the letters in brackets refer to the category each method belongs to: A = Reinforcement learning, B = Supervised learning, C = Unsupervised learning. For the ‘‘Cognitive ability’’ column, the legend is: 1 = Human effort, 2 = Human intention, 3 = Human motion, 4 = Object of interest, 5 = Robot actuation, 6 = Robot decision making, 7 = Task recognition. For the ‘‘Key result’’ column, the legend is: R1 = Accuracy of movement, R2 = Robustness, R3 = Proof of concept, R4 = High-level performance increase, R5 = Reduction of human workload. For the ‘‘Robotic platform’’ column, the number in brackets indicates the number of degrees of freedom of the robotic platform used.

Authors | ML technique | Cognitive ability | Sensing | Interaction task | Robotic platform | Human role | Robot role | Key result
[12] | CNN (B) | Position of object of interest (4) | Vision | Object handling | Custom | The human positions the tool | The robot actuates itself, changing the orientation of the flexible tip | System was 60% accurate in adjusting properly according to the object of interest (R1)
[13] | Interactive RL (A) | Decision making (6) | Vision | Collaborative assembly | Universal Robots UR10 (6) | The user has its own part in the process, but can also actively alter the decisional process of the robot | The robot observes the scene, understands the assembly process and chooses the action to perform | The system is able to handle deviations during action execution (R2)
[14] | SARSOP (A) | Human trust (2) | Vision | Object handling | Kinova Jaco (6) | The human either stays put or prevents the robot from executing an action, if not trusting it | The robot clears tables from objects | The RL model matches human trust to the robot’s manipulation capabilities (R3)
[15] | CNN (B) | Human intention (2) | Muscular activity | Object handling | Rethink Robotics Baxter (7) | The human jointly lifts an object with the robot and moves it to a different position | The robot detects the human intention and follows its movement, keeping the object balanced | The robot is successful in the task execution (R3)
[16] | ANN (B) | Human motion (3) | Accelerometry | Collaborative manufacturing | Rethink Robotics Baxter (7) | The human holds one end of the saw and saws different objects in different ways | The robot holds the other end of the saw and tracks its movement by recognizing parameters | The devised system shows smoother performance than the comparison approach (R4)
[17] | GMM (C) | Position of object of interest (4) | Accelerometry | Object handling | Custom | The human positions the end effector of the robot with a certain pose and holds the robot in place | The robot detects its own end effector and adjusts it with its distal DoFs | Robot performs positioning with lower contact forces on the target than the ones performed by the human (R5)
[18] | Density-matching RL (A) | Decision making (6) | Vision | Object handover | Robotis Manipulator-X (7) | The human goes to grab the object held by the robot, but decides to change the way to grab it at a certain point in the interaction | The robot recognizes the change in the user’s motions and determines its actions accordingly | The robot recognizes the change in the user’s motions and changes its actions accordingly (R3)
[19] | RL (A) + Bayes (B) | Decision making (6) | Vision | Collaborative assembly | Gambit (6) | The human gazes at an object and goes to fetch it | The robot understands the intention and goes to reach the same object jointly with the human; if it is not confident about the action to perform, it can request help from the human | The interaction with the human improved the robot performance (R3)
[20] | DNS (B) | Task recognition (7) | Vision | Collaborative assembly | Rethink Robotics Sawyer (7) | The human builds a system of coloured pipes jointly with the robot | The robot understands the current step of the assembly sequence and performs a complementary action; it also gives suggestions to the user about the next steps to perform | The model is able to adapt to unexpected scenarios (R2)
[21] | Q-learning (A) | Decision making (6) | Vision | Object handling | Universal Robots UR5 (6) | The human jointly lifts an object with the robot and moves it repeatedly from point A to point B | The robot detects the human intention and follows its movement, keeping the object with a certain orientation | After some episodes, the robot is able to learn the path performed by the human and follow it properly (R3)
[22] | Q-learning (A) | Decision making (6) | Vision | Object handling | Willow Garage PR2 (7) | The human holds one end of a plank with a ball on top of it and moves it | The robot holds the other end of the plank and moves jointly with the human to prevent the ball from falling | The measured interaction forces are lowered thanks to the learning process (R5)
[23] | VAE (C) + LSTM (B) + DQN (A) | Decision making (6) | Vision | Collaborative assembly | ABB YuMi (7) | The human packages an object | The robot understands the actions performed by the human and plans its choices to assist in the assembly | The algorithm outperforms supervised classifiers (R4)
[24] | HMM (B) | Task recognition (7) | Vision | Collaborative assembly | Rethink Robotics Baxter (7) | The human assembles a chair | The robot predicts the next step of the assembly and assists the human accordingly | Robot prediction capability matches the human one (R3)
[25] | GMM (C) + PI2 (A) | Robot motion (5) | Vision | Object handover | IIT COMAN (4) | The human hands over an object to the robot under different conditions | The robot manipulator moves to reach the object | The robot performs the task respecting every constraint applied to it (R4)
[26] | ANN (B) | Control variables (5) | Accelerometry | Object handling | Custom | The human grasps a tool held by the robot and moves it to a certain position | The robot adapts its movements to the trajectory imposed by the human | The system performs with higher safety, compliance and precision (R1)
[27] | ANN (B) | Ground reaction force + centre of pressure (1) | Accelerometry | Object handling | KUKA LWR (7) | The human lifts an object by one end | The robot grabs the object by the other end and helps the human lifting | The joint forces exerted on the human were noticeably reduced (R5)
[28] | LSTM (B) + Q-learning (A) | Human intention (2), control variables (5) | Accelerometry | Object handling | FRANKA EMIKA Panda (7) | The human drags the object held by the robot | The robot detects the intention of the human and follows the trajectory forced by the dragging of the object | Interaction forces are noticeably reduced (R5)
[29] | GMM (C) | Human motion (3) | Vision | Collaborative assembly | Willow Garage PR2 (7) | The human touches pads of a certain colour | The robot has to touch different pads than the human, avoiding undesired collisions with it | 94.6% of the trials were successful (R3)
[30] | GMM (C) | Human motion (3) | Vision | Object handover | KUKA LWR (7) | The human passes an object to the robot, from one end of the table | The robot, from the opposite end of the table, recognizes what type of object is being passed, and receives it accordingly | The system demonstrated high accuracy in recognizing the tool from human observation (R4)
[31] | Interaction Probabilistic Movement Primitives (C) | Human motion (3) | Vision | Object handover | KUKA LWR (7) | The human is located at an end of a table and holds its hand out to receive an object | The robot, on the human’s adjacent side, picks an object from the human’s opposite end of the table and hands the object to the human | Robot’s positioning error lower than the one registered with standardized methods (R1)
[32] | Relational Action Process (A) | Human intention (3), robot decisions (6) | Vision | Collaborative assembly | Rethink Robotics Baxter (7) | The human assembles a box | The robot classifies the user’s and its own actions and assists accordingly | Decision risk lowered thanks to the devised approach (R2)
[33] | RNN (B) | Goal target (7) | Vision | Collaborative assembly | Softbank Robotics Nao | The human hits some objects with a known sequence | The robot recognizes the sequence and replicates it; the last part of the sequence must be done collaboratively | The robot was able to fulfil the task under the setup conditions (R3)
[34] | VAE (C) + LSTM (B) | Goal image (7) | Vision | Object handover | Tokyo Robotics Torobo (6) | The human needs to move objects from one place to another | The robot looks at the human hand, understands the desired object and hands it over to the human | The proposed framework allowed the robot to infer the human goal state and adapt to situational changes (R2)
[35] | Inverse RL (A) | Human type of worker (2) | Vision | Collaborative manufacturing | ABB IRB (6) | The human has to polish two sides of an object held by the robot | The robot recognizes the type of user and adjusts itself according to the information | The system proves to be more efficient than a person telling the robot what actions to perform (R3)
[36] | Gaussian Process Regression (B) | Human muscular activity (1) | Vision + Accelerometry | Collaborative manufacturing | KUKA LWR (7) | The human has to polish or drill surfaces held by the robot | The robot orients the surfaces to maximize certain parameters of interest | The system was able to reduce the muscle fatigue of the human (R5)
[37] | ANN (B) | Robot level of assistance (6) | Vision + muscular activity + Accelerometry | Object handling | KUKA LWR iiwa 14 820 (7) | The human has to lift an object | The robot assists the lifting of an object, recognizing the type of assistance required | The overall muscular activity of the human is reduced (R5)
[38] | Model-based RL (A) + MLP (B) | Control variables (5) | Accelerometry | Object handling | KUKA LWR iiwa 14 820 (7) | The human has to lift an object | The robot assists the lifting of an object | The robot fulfils its task, while respecting embedded safety rules (R3)
[39] | GMM (C) | Task recognition (7) | Accelerometry | Object handling | Barrett Technology WAM (7) | The human grabs an object already held by the robot | After offline demonstrations, the robot moves together with the human’s hand to the target position | Robot shows higher compliance thanks to the devised solution (R4)
[40] | HMM (C) | Robot motion (5) | Accelerometry | Object handover, object handling | Barrett Technology WAM (7) | The human holds an object of interest | After offline demonstrations, the robot pours the liquid from a bottle it is already holding | The system improves reactive and pro-active capabilities of the robot (R4)
[41] | LSTM (B) | Control variables (5) | Accelerometry | Object handling | Haptic Device Touch (3) | The human holds a fork and uses it to collect food | The robot holds a spoon and helps the human scooping the same piece of food | The robot is successful in the task execution (R3)
[42] | Proactive Incremental Learning (B) | Decision making (6) | Vision | Object handover | KUKA LWR (7) | The human gestures to the robot for the action and gazes at the object of interest | The robot recognizes the command, grabs the required object and hands it over to the human | The devised system minimized the robot attention demand (R4)
[43] | RL (A) | Decision making (6) | Vision | Collaborative assembly | Rethink Robotics Sawyer (7) | The human moves pieces on a surface according to certain rules | The robot moves other pieces according to the human’s actions following a shared objective, giving reasons for the actions performed | The users found the robot more engaging because of the motivations given (R3)
[44] | Interactive RL (A) | Decision making (6) | Vision | Collaborative assembly | Barrett Technology WAM (7) | The human has to move blocks from one place to another | The robot jointly moves other pieces at the same time | The robot was successful in the task (R3)
[45] | RL (A) | Decision making (6) | Vision | Collaborative assembly | Universal Robots UR10 (7) | The human has to follow different steps to prepare a meal | The robot has to act concurrently with the human in the preparation; it communicates the steps it is about to perform to the human | The required time for task completion is lower during human–robot collaboration (R4)
[46] | kNN (B) | Human hand pose (3) | Accelerometry | Object handling | KUKA LWR + Custom (32) | The human has to support an object, while applying a rotation to it | The robot, by looking at the human hand pose, has to provide the required rotation | The devised system lowered the biomechanic measurement (R5)
[47] | Clustering (C) + Bayes (B) | Human intention (2) | Vision | Collaborative assembly | Rethink Robotics Sawyer (7) | The human has to align 4 blocks, with a rule unknown to the robot | The robot recognizes the human intention and grabs the remaining needed blocks | The robot achieved 80% average accuracy in classifying the sequences being performed (R4)
[48] | GMM (C) + HMM (C) | Human motion (3) | Vision | Collaborative assembly | Universal Robots UR5 (6) | The human has to assemble a tower with blocks | The robot understands the human’s motion and acts accordingly to assist during the task | The robot is successful in assisting the human (R3)
[49] | Inverse RL (A) | Robot actions (6) | Accelerometry + muscular activity + speech | Collaborative assembly | Staubli TX2-60 (6) | The human has to put concentric objects on top of each other | The robot receives the inputs and assists the human in the process, by grasping the proper object; there is no specific order for the execution of the assembly steps | The robot matches most of the times the behaviour expected by the human (R3)
[50] | Extreme learning machine (B) | Human intention (2) | Accelerometry + muscular activity + speech | Object handover | Staubli TX2-60 (6) | The human is handing over an object to the robot | The robot understands the intent of the human and acts accordingly | The approach achieved higher accuracy than past works (R4)
[51] | DNS (B) | Decision making (6) | Vision | Object handover | Rethink Robotics Sawyer (7) | The human is assembling a system of pipes | The robot looks at the human, understands the step of the process it is currently at and passes the pipe needed by the human | The devised system improves its efficiency with respect to a standard approach (R4)
[52] | Q-learning (A) | Control variables (5) | Accelerometry | Object handling | FRANKA EMIKA Panda (7) | The human moves the object held by the robot along a 2D surface | The robot keeps holding the object and follows the trajectory | The robot performs better in terms of precision of positioning and time efficiency (R1)
[53] | Q-learning (A) | Control variables (5) | Accelerometry | Object handling | FRANKA EMIKA Panda (7) | The human moves an object held by the robot from one point to another repeatedly; in case of variation, it moves it to a third point | The robot follows the trajectory forced by the human on the object; if it goes to the third point, it plans another trajectory to reach the second point | The robot is successful in the task, as long as the human does not exert high forces on the object (R3)
[54] | LSTM (B) | Human intention (2) | Vision | Collaborative assembly | Universal Robots UR5 (6) | The human is assembling different pieces together | The robot has to predict the intention of the human and assist accordingly | The robot achieves the task with lower error in positioning (R1)
[55] | RNN (B) | Human pose (3) | Vision | Object handover | Universal Robots UR5 (6) | The human is working on the assembly of an object | The robot looks at the human, predicts its next pose and, according to that, moves pre-emptively to pick up a screwdriver and pass it to the human | The robot is successful in the task execution (R3)
[56] | DNS (C) | Human intention (2) | Vision + muscular activity + brain activity | Object handover | Barrett Technology WAM (7) | The human is assembling a chair | The robot, positioned at the side of the human, predicts the human’s intention and hands over the related tool | Optimal settings for best system performance were identified (R4)
Another family of works delves more into the physical side of the human–robot collaboration. In these, the information collected from the environment is used to extract simpler high-level information than in the previous cases. Besides, the same information is also used to control the actuation of the robot in the physical interaction. In this type of task, the exchange of mechanical power with the user is of the utmost importance. Indeed, in these cases, the robot can exert higher forces, according to the task. So, further aspects related to the safety of the interaction come into play.

The most complex interaction task of this family is collaborative manufacturing. Here, the robot applies permanent physical alterations to an object in collaboration with the human, as part of a manufacturing process. In this case, the exchange of interaction forces between human and robot is crucial. In Nikolaidis et al. [35] and Peternel et al. [36], the robot has a supportive role: it holds the object and orients it, according respectively to the intentions of the user and to the parameters of interest to optimize, to assist the user in the polishing task being performed ([36] considers drilling, too). However, in Chen et al. [15], the robot takes a more active role, applying forces jointly with the user to hold a saw from one of its ends and cut an object.

In object handling, the most recurrent interaction in this family, the task starts with the robot and the user having already jointly grabbed an object from different sides. Then, they move it together to another point in space, without either of them letting go of it. Of course, it is possible to gather high-level understanding from the collected information. Roveda et al. [37] implemented a robotic system that understands which type of assistance is required by the user in the interaction and performs it accordingly. Similarly, in Sasagawa et al. [41], a robot holds a spoon and, according to what the user does with a fork, they scoop different types of food together, in the related way. However, it is clear how, in this type of interaction, the physical component is predominant. In most cases, the robot simply follows the movements of the user in the displacement of the object [26,28,46]. In more complex versions of this task, the robot, while the object is being displaced, maintains certain constraints on it. For instance, in Ghadirzadeh et al. [22], the robot needs to keep a rolling object on top of another one it is holding in the collaboration. Likewise, Wu et al. [52] have the object move only along a set surface. Two object handling works address the problem of fine positioning. In these cases, the user drags the robot’s end effector inside a region of interest. The robot, then, according to the collected information, adjusts its own end effector, while still being held by the user, to achieve the exact position and orientation required by the task [12,17].

Looking at Fig. 2, the evolution through time of the amount of works in the depicted categories of collaborative tasks is reported. It is possible to appreciate that collaborative assembly experiments show a constantly increasing trend. This is evidence that robotic scientists are starting to consider more complex tasks, where the interplay between the decisions of the user and of the robot becomes crucial. Robots are starting to shift from being tools that have to follow the user’s movements to proper collaborative entities in the scene. Of course, works related to interactions that are simpler from a cognitive point of view, that is object handling and object handover, are not decreasing in their amount. Besides, a steep increase of object handling works between 2019 and 2020 was registered. Collaborative manufacturing works are still at early stages, being particularly specific categories of tasks, but their number is increasing as well.

3.2. Metrics of collaborative tasks

In a research study, a goal is pursued through an experimental validation. According to the case, the result can vary in its nature. However, it is possible to come up with a categorization for the results as well (see Fig. 3).

Fig. 3. Time trends of the metrics employed in the experimental validations (5 related to precision, 4 to robustness, 17 to proof of concept, 13 to performance improvement and 7 to reduction of physical workload).

The first reported set of results refers to the precision of movement of the robot. Here, the quality of the works was assessed by measuring the accuracy of the movement performed by the robot during the interaction. For the type of objectives being pursued in these papers, such a quantity is crucial. In most cases, they measure the positioning of the robot’s end effector at the end of the movement, usually when it interacts with the object of interest or the user, as in object handling or object handover, respectively [12,30]. In another case, as the study was related to object handling, the movement throughout the interaction was measured [52].

In another set of works, robustness is the main factor of interest. In general terms, this is the capability of the robot to handle the collaboration in case something unpredictable occurs. This can happen when steps of a process or ways of interacting differ from the expected ones. In Cunha et al. [20], one piece of the pipes needed for the assembly was purposely removed to evaluate how the robot could handle such an unexpected scenario. However, it is also possible that the situation is built so that the robot does not have any prior knowledge about the sequence of the steps to be executed [34]. In that case, every action performed by the user is an unexpected scenario the robot needs to address.

A third of the analysed works can be considered as a proof of concept of the designed system. In this sense, for the validation of the result, no quantitative measurements of the performance are provided. Indeed, the main objective of these papers is to demonstrate that the devised systems can work in a way that is considered useful to the people who interact with the robot. Consequently, if they provide measurements about the outcome of the works, these are of a qualitative kind, about the perception users had of the robot improving the work performance. For example, Chen et al. [14] conclude that most of the times the user can put the required trust in the robot performing the task alongside them. In Luo et al. [29], the robot met the expectations of the user about task completion.

In more than a dozen cases in the set of selected papers, a proof of better performance of the illustrated system is provided by means of comparison with a base case. The latter can be interpreted in multiple ways. For instance, improvement can be demonstrated by comparing the accuracy of the proposed ML system with the one achieved by a supervised classifier [23,47]. Conversely, in Zhou et al. [56], different configurations of the same system were tested to highlight the one with the best performance.

Finally, another subset of papers aimed at reducing the human physical workload during task completion. A common aspect of these papers is the use of biomechanical variables, minimized through the robotic system, to validate the efficacy of the proposed task during the interaction. These cognitive variables, mostly inherited
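Several of the object handling works discussed above ([22,52,53]) adapt the robot's compliance online with tabular reinforcement learning. The sketch below only shows the general shape of such a loop under stated assumptions: the force discretization, the damping actions, the reward and the stand-in dynamics are hypothetical, and the snippet is not the controller of any reviewed paper.

```python
import numpy as np

# Illustrative sketch: tabular Q-learning that adapts a discrete admittance
# (damping) level while the human drags a jointly held object.
# State = discretized interaction force, action = damping level.
n_force_bins, n_damping_levels = 6, 4
Q = np.zeros((n_force_bins, n_damping_levels))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(1)

def simulate_step(force_bin, damping):
    """Hypothetical stand-in for the coupled human-robot dynamics: higher
    damping tends to raise the sensed interaction force, lower damping
    tends to reduce it."""
    next_bin = int(np.clip(force_bin + damping - 1 + rng.integers(-1, 2),
                           0, n_force_bins - 1))
    reward = -next_bin - 0.1 * damping      # prefer low forces, mild damping
    return next_bin, reward

state = 3
for _ in range(5000):
    # epsilon-greedy action selection
    action = rng.integers(n_damping_levels) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = simulate_step(state, action)
    # Standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("preferred damping level per force bin:", Q.argmax(axis=1))
```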
Fig. 4. Results from the selected set of papers related to the modelled cognitive variables. In Fig. 4(a), for each circular sector, the related category and its occurrence in the
selected set of papers are shown. Fig. 4(b) shows their time trends. In doing so, the first two categories, the three central ones and the last two were grouped together, according
to them being related to aspects of the robot (22 works), the human user (18 works) or the scene (7 works).
Fig. 5. Results from the selected set of papers related to the machine learning algorithms (18 cases of reinforcement learning, 12 of unsupervised learning and 23 of supervised learning). Besides, there was 1 case of a deep learning model in 2018, 3 in 2019 and 8 in 2020. The ‘‘Composite’’ series is related to the works that made use of more than one model as part of the same work (8 cases in total).
Fig. 6. Results from the selected set of papers related to the sensing modality (30 cases of use of vision, 16 of accelerometry, 5 of muscular activity and 3 of others). The ‘‘Others’’ category is composed of one case of use of brain activity and two of speech. The ‘‘Sensor fusion’’ series represents the trend of using multiple sensing modalities in the same system, for a total of 5 cases.
Fig. 7. Comparison of the machine learning techniques exploited in the different categories of collaborative tasks.
Fig. 8. Frequency of occurrence of the ML techniques while modelling specific cognitive variables.
Fig. 9. Frequency of occurrence of the categories of results achieved in the depicted collaborative tasks.

the space of the workstation. However, mobile manipulators do not seem to be used jointly with ML in HRC. When they are, the mobile ability of the robot is not used in the experimental validation (see Section 3). Hence, this other direction of research is suggested to be followed.

Finally, a great research thread in HRC is represented by safety [74]. Many works are carried out evaluating the feasibility of safe applications of collaborative robotics, but only a few of them make use of ML. For instance, Aivaliotis et al. [75] made use of neural networks to estimate the torque, to increase the speed of the robot in reducing the amount of force exerted in simulated unexpected collisions between a human and a robot. Enabling safe collaborations by means of ML techniques is another research direction worth pursuing [76].

8. Conclusions

In this paper, a systematic review was performed, which led to the selection of 45 papers, to find current trends in human–robot collaboration, with a specific focus on the use of machine learning techniques.

A classification of different collaborative tasks was proposed, mainly composed of collaborative assembly, object handover, object handling and collaborative manufacturing. Alongside it, metrics of these tasks were analysed. The cognitive variables modelled in the tasks were assessed and discussed, at different levels of detail.

Machine learning techniques were subdivided into supervised, unsupervised and reinforcement learning, according to the learning algorithm used. The use of composite machine learning systems is not predominant in the literature and it is encouraged, especially regarding deep reinforcement learning.

Regarding sensing, vision is the most used sensing modality in the selected set of papers, followed by accelerometry, with few works exploiting the user’s muscular activity. A very small portion of them uses other sensing modalities. Besides, similarly to the results concerning machine learning, sensor fusion is not much used in human–robot collaboration yet.

Further analysis was carried out, looking for correlations between the features of the represented set of works. These were used to give guidelines to researchers for the design of future works that involve the use of machine learning in human–robot collaboration applications. Among these, it was reported that reinforcement learning is predominantly used to model cognitive variables related to the robot itself, while unsupervised learning tends to be employed more in object handover and to model variables related to human motion. Looking at the types of results in the collaborative tasks, a good amount of them is still proof of concept, although some trends are starting to appear, like robustness for collaborative assembly and reduction of physical workload for object handling works.

Finally, aspects of human–robot collaboration research that did not fall under the scope of this review were mentioned, regarding their relationship with machine learning. The use of robotic architectures other than robotic manipulators, jointly with machine learning techniques, is suggested. Other domains and approaches that can benefit from machine learning are safety and the use of digital twins for collaborative applications.

Declaration of competing interest

No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.rcim.2022.102432.

Data availability

No data was used for the research described in the article.

Acknowledgements

Francesco Semeraro’s work was supported by an EPSRC CASE studentship funded by BAE Systems. Angelo Cangelosi’s work was partially supported by the H2020 projects TRAINCREASE and eLADDA, the UKRI Trustworthy Autonomous Systems Node on Trust and the US Air Force project THRIVE++.

References

[1] A positioning paper by the International Federation of Robotics, What is a collaborative industrial robot?, 2018.
[2] A. Bauer, D. Wollherr, M. Buss, Human–robot collaboration: a survey, Int. J. Humanoid Robot. 5 (2008) 47–66.
[3] E. Matheson, R. Minto, E.G.G. Zampieri, M. Faccio, G. Rosati, Human–robot collaboration in manufacturing applications: A review, MDPI Robot. 8 (2019) 100, http://dx.doi.org/10.3390/robotics8040100.
[4] K.E. Kaplan, Improving inclusion segmentation task performance through human-intent based human-robot collaboration, in: 11th ACM IEEE International Conference on Human-Robot Interaction, 2016, pp. 623–624.
[5] M. Jacob, Y.-T. Li, G. Akingba, J.P. Wachs, Gestonurse: a robotic surgical nurse for handling surgical instruments in the operating room, J. Robot. Surg. 6 (2012) 53–63.
[6] A. Amarillo, E. Sanchez, J. Caceres, J. Oñativia, Collaborative human–robot interaction interface: Development for a spinal surgery robotic assistant, Int. J. Soc. Robot. (2021) 1–12.
[7] T. Ogata, K. Takahashi, T. Yamada, S. Murata, K. Sasaki, Machine learning for cognitive robotics, Cogn. Robot. (2022).
[8] U. Emeoha Ogenyi, J. Liu, C. Yang, Z. Ju, H. Liu, Physical Human-Robot Collaboration: Robotic Systems, Learning Methods, Collaborative Strategies, Sensors and Actuators, Tech. Rep.
[9] L.M. Hiatt, C. Narber, E. Bekele, S.S. Khemlani, J.G. Trafton, Human modeling for human–robot collaboration, Int. J. Robot. Res. 36 (2017) 580–596, http://dx.doi.org/10.1177/0278364917690592.
[10] B. Chandrasekaran, J.M. Conrad, Human-robot collaboration: A survey, in: Conference Proceedings - IEEE SOUTHEASTCON, vol. 2015-June, 2015, http://dx.doi.org/10.1109/SECON.2015.7132964.
[11] D. Mukherjee, K. Gupta, L.H. Chang, H. Najjaran, A survey of robot learning strategies for human-robot collaboration in industrial settings, Robot. Comput.-Integr. Manuf. 73 (2022) 102231.
[12] M.A. Ahmad, M. Ourak, C. Gruijthuijsen, J. Deprest, T. Vercauteren, E. Vander Poorten, Deep learning-based monocular placental pose estimation: towards collaborative robotics in fetoscopy, Int. J. Comput. Assist. Radiol. Surg. 15 (2020) 1561–1571, http://dx.doi.org/10.1007/s11548-020-02166-3.
[13] S.C. Akkaladevi, M. Plasch, A. Pichler, M. Ikeda, Towards reinforcement based learning of an assembly process for human robot collaboration, in: Procedia Manuf., 38, 2019, pp. 1491–1498, http://dx.doi.org/10.1016/j.promfg.2020.01.138.
[14] M. Chen, H. Soh, D. Hsu, S. Nikolaidis, S. Srinivasa, Trust-aware decision making for human-robot collaboration: Model learning and planning, ACM Trans. Hum.-Robot Interact. 9 (2020) http://dx.doi.org/10.1145/3359616, arXiv:1801.04099.
[15] X. Chen, Y. Jiang, C. Yang, Stiffness estimation and intention detection for human-robot collaboration, in: Proceedings of the 15th IEEE Conference on Industrial Electronics and Applications, ICIEA 2020, 2020, pp. 1802–1807, http://dx.doi.org/10.1109/ICIEA48937.2020.9248186.
[16] X. Chen, N. Wang, H. Cheng, C. Yang, Neural learning enhanced variable admittance control for human-robot collaboration, IEEE Access 8 (2020) 25727–25737, http://dx.doi.org/10.1109/ACCESS.2020.2969085.
[17] W. Chi, J. Liu, H. Rafii-Tari, C. Riga, C. Bicknell, G.Z. Yang, Learning-based endovascular navigation through the use of non-rigid registration for collaborative robotic catheterization, Int. J. Comput. Assist. Radiol. Surg. 13 (2018) 855–864, http://dx.doi.org/10.1007/s11548-018-1743-5.
[18] S. Choi, K. Lee, H.A. Park, S. Oh, A nonparametric motion flow model for human robot cooperation, in: IEEE International Conference on Robotics and Automation ICRA, 2018, pp. 7211–7218.
[19] M.J.Y. Chung, A.L. Friesen, D. Fox, A.N. Meltzoff, R.P. Rao, A Bayesian developmental approach to robotic goal-based imitation learning, PLoS One 10 (2015) http://dx.doi.org/10.1371/journal.pone.0141965.
[20] A. Cunha, F. Ferreira, E. Sousa, L. Louro, P. Vicente, S. Monteiro, W. Erlhagen, E. Bicho, Towards collaborative robots as intelligent co-workers in human-robot joint tasks: what to do and who does it? in: 52nd International Symposium on Robotics, ISR 2020, 2020, pp. 1–8.
[21] Z. Deng, J. Mi, D. Han, R. Huang, X. Xiong, J. Zhang, Hierarchical robot learning for physical collaboration between humans and robots, in: 2017 IEEE International Conference on Robotics and Biomimetics, IEEE ROBIO 2017, 2017, pp. 750–755.
[22] A. Ghadirzadeh, J. Bütepage, A. Maki, D. Kragic, M. Björkman, A sensorimotor reinforcement learning framework for physical human-robot interaction, in: IEEE International Conference on Intelligent Robots and Systems, 2016, pp. 2682–2688, http://dx.doi.org/10.1109/IROS.2016.7759417, arXiv:1607.07939.
[23] A. Ghadirzadeh, X. Chen, W. Yin, Z. Yi, M. Björkman, D. Kragic, Human-centered collaborative robots with deep reinforcement learning, IEEE Robot. Autom. Lett. 6 (2020) 566–571, http://dx.doi.org/10.1109/LRA.2020.3047730.
[24] E.C. Grigore, A. Roncone, O. Mangin, B. Scassellati, Preference-based assistance prediction for human-robot collaboration tasks, in: IEEE International Conference on Intelligent Robots and Systems, 2018, pp. 4441–4448, http://dx.doi.org/10.1109/IROS.2018.8593716.
[25] Y. Huang, J. Silverio, L. Rozo, D.G. Caldwell, Generalized task-parameterized skill learning, in: IEEE International Conference on Robotics and Automation ICRA, 2018, pp. 5667–5674.
[26] S. Kuang, Y. Tang, A. Lin, S. Yu, L. Sun, Intelligent control for human-robot cooperation in orthopedics surgery, in: G. Zheng, W. Tian, X. Zhuang (Eds.), Advances in Experimental Medicine and Biology, vol. 1093, 2018, pp. 245–262, http://dx.doi.org/10.1007/978-981-13-1396-7_19.
[27] M. Lorenzini, W. Kim, E. De Momi, A. Ajoudani, A synergistic approach to the real-time estimation of the feet ground reaction forces and centers of pressure in humans with application to Human-Robot collaboration, IEEE Robot. Autom. Lett. 3 (2018) 3654–3661, http://dx.doi.org/10.1109/LRA.2018.2855802.
[28] W. Lu, Z. Hu, J. Pan, Human-robot collaboration using variable admittance control and human intention prediction, in: IEEE International Conference on Automation Science and Engineering, vol. 2020-Augus, 2020, pp. 1116–1121, http://dx.doi.org/10.1109/CASE48305.2020.9217040.
[29] R. Luo, R. Hayne, D. Berenson, Unsupervised early prediction of human reaching for human–robot collaboration in shared workspaces, Auton. Robots 42 (2018) 631–648, http://dx.doi.org/10.1007/s10514-017-9655-8.
[30] G.J. Maeda, G. Neumann, M. Ewerton, R. Lioutikov, O. Kroemer, J. Peters, Probabilistic movement primitives for coordination of multiple human–robot collaborative tasks, Auton. Robots 41 (2017) 593–612, http://dx.doi.org/10.1007/s10514-016-9556-2.
[31] G. Maeda, M. Ewerton, G. Neumann, R. Lioutikov, J. Peters, Phase estimation for fast action recognition and trajectory generation in human–robot collaboration, Int. J. Robot. Res. 36 (2017) 1579–1594, http://dx.doi.org/10.1177/0278364917693927.
[32] T. Munzer, M. Toussaint, M. Lopes, Efficient behavior learning in human–robot collaboration, Auton. Robots 42 (2018) 1103–1115, http://dx.doi.org/10.1007/s10514-017-9674-5.
[33] S. Murata, Y. Li, H. Arie, T. Ogata, S. Sugano, Learning to achieve different levels of adaptability for human-robot collaboration utilizing a neuro-dynamical system, IEEE Trans. Cogn. Dev. Syst. 10 (2018) 712–725, http://dx.doi.org/10.1109/TCDS.2018.2797260.
[34] S. Murata, W. Masuda, J. Chen, H. Arie, T. Ogata, S. Sugano, Achieving human–robot collaboration with dynamic goal inference by gradient descent, in: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11954 LNCS, 2019, pp. 579–590, http://dx.doi.org/10.1007/978-3-030-36711-4_49.
[35] S. Nikolaidis, R. Ramakrishnan, K. Gu, J. Shah, Efficient model learning from joint-action demonstrations for human-robot collaborative tasks, ACM/IEEE Int. Conf. Hum.-Robot Interact. 2015-March (2015) 189–196, http://dx.doi.org/10.1145/2696454.2696455.
[36] L. Peternel, C. Fang, N. Tsagarakis, A. Ajoudani, A selective muscle fatigue management approach to ergonomic human-robot co-manipulation, Robot. Comput.-Integr. Manuf. 58 (2019) 69–79, http://dx.doi.org/10.1016/j.rcim.2019.01.013.
[37] L. Roveda, S. Haghshenas, M. Caimmi, N. Pedrocchi, L.M. Tosatti, Assisting operators in heavy industrial tasks: On the design of an optimized cooperative impedance fuzzy-controller with embedded safety rules, Front. Robot. AI 6 (2019) http://dx.doi.org/10.3389/frobt.2019.00075.
[38] L. Roveda, J. Maskani, P. Franceschi, A. Abdi, F. Braghin, L. Molinari Tosatti, N. Pedrocchi, Model-based reinforcement learning variable impedance control for human-robot collaboration, J. Intell. Robot. Syst., Theory Appl. 100 (2020) 417–433, http://dx.doi.org/10.1007/s10846-020-01183-3.
[39] L. Rozo, D. Bruno, S. Calinon, D.G. Caldwell, Learning optimal controllers in human-robot cooperative transportation tasks with position and force constraints, in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2015, pp. 1024–1030.
[40] L. Rozo, J. Silvério, S. Calinon, D.G. Caldwell, Learning controllers for reactive and proactive behaviors in human-robot collaboration, Front. Robot. AI 3 (2016) http://dx.doi.org/10.3389/frobt.2016.00030.
[41] A. Sasagawa, K. Fujimoto, S. Sakaino, T. Tsuji, Imitation learning based on bilateral control for human-robot cooperation, IEEE Robot. Autom. Lett. 5 (2020) 6169–6176, http://dx.doi.org/10.1109/LRA.2020.3011353.
[42] D. Shukla, O. Erkent, J. Piater, Learning semantics of gestural instructions for human-robot collaboration, Front. Neurorobot. 12 (2018) http://dx.doi.org/10.3389/fnbot.2018.00007.
[43] A. Tabrez, S. Agrawal, B. Hayes, Explanation-based reward coaching to improve human performance via reinforcement learning, in: ACM IEEE International Conference on Human-Robot Interaction, 2019, pp. 249–257.
[44] K. Tsiakas, M. Papakostas, M. Papakostas, M. Bell, R. Mihalcea, S. Wang, M. Burzo, F. Makedon, An interactive multisensing framework for personalized human robot collaboration and assistive training using reinforcement learning, in: ACM International Conference Proceeding Series, vol. Part F1285, 2017, pp. 423–427, http://dx.doi.org/10.1145/3056540.3076191.
[45] V.V. Unhelkar, S. Li, J.A. Shah, Decision-making for bidirectional communication in sequential human-robot collaborative tasks, in: ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 2020, pp. 329–341, http://dx.doi.org/10.1145/3319502.3374779.
[46] L. v. der Spaa, M. Gienger, T. Bates, J. Kober, Predicting and optimizing ergonomics in physical human-robot cooperation tasks, in: 2020 IEEE International Conference on Robotics and Automation, ICRA, 2020, pp. 1799–1805, http://dx.doi.org/10.1109/ICRA40945.2020.9197296.
[47] S. Vinanzi, A. Cangelosi, C. Goerick, The role of social cues for goal disambiguation in human-robot cooperation, in: 2020 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, 2020, pp. 971–977, http://dx.doi.org/10.1109/RO-MAN47096.2020.9223546.
[48] D. Vogt, S. Stepputtis, R. Weinhold, B. Jung, H. Ben Amor, Learning human-robot interactions from human-human demonstrations (with applications in lego rocket assembly), in: IEEE-RAS International Conference on Humanoid Robots, 2016, pp. 142–143.
[49] W. Wang, R. Li, Y. Chen, Z.M. Diekel, Y. Jia, Facilitating human–robot collaborative tasks by teaching-learning-collaboration from human demonstrations, IEEE Trans. Autom. Sci. Eng. 16 (2018) 640–653.
[50] W. Wang, R. Li, Y. Chen, Y. Jia, Human intention prediction in human-robot collaborative tasks, in: ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 279–280, http://dx.doi.org/10.1145/3173386.3177025.
[51] W. Wojtak, F. Ferreira, P. Vicente, L. Louro, E. Bicho, W. Erlhagen, A neural integrator model for planning and value-based decision making of a robotics assistant, Neural Comput. Appl. (2020) http://dx.doi.org/10.1007/s00521-020-05224-8.
[52] M. Wu, Y. He, S. Liu, Adaptive impedance control based on reinforcement learning in a human-robot collaboration task with human reference estimation, Int. J. Mech. Control 21 (2020) 21–31.
[53] M. Wu, Y. He, S. Liu, Shared impedance control based on reinforcement learning in a human-robot collaboration task, in: K. Berns, D. Gorges (Eds.), Advances in Intelligent Systems and Computing, vol. 980, 2020, pp. 95–103, http://dx.doi.org/10.1007/978-3-030-19648-6_12.
[54] L. Yan, X. Gao, X. Zhang, S. Chang, Human-robot collaboration by intention recognition using deep lstm neural network, in: Proceedings of the 8th International Conference on Fluid Power and Mechatronics, FPM 2019, 2019, pp. 1390–1396, http://dx.doi.org/10.1109/FPM45753.2019.9035907.
[55] J. Zhang, H. Liu, Q. Chang, L. Wang, R.X. Gao, Recurrent neural network for motion trajectory prediction in human-robot collaborative assembly, CIRP Ann. 69 (2020) 9–12, http://dx.doi.org/10.1016/j.cirp.2020.04.077.
[56] T. Zhou, J.P. Wachs, Spiking neural networks for early prediction in human–robot collaboration, Int. J. Robot. Res. 38 (2019) 1619–1643, http://dx.doi.org/10.1177/0278364919872252, arXiv:1807.11096.
[57] T. Jo, Machine Learning Foundations: Supervised, Unsupervised, and Advanced Learning, Springer Nature, 2021.
[58] A. Weiss, A.K. Wortmeier, B. Kubicek, Cobots in industry 4.0: A roadmap for future practice studies on human-robot collaboration, IEEE Trans. Hum.-Mach. Syst. 51 (4) (2021) 335–345, http://dx.doi.org/10.1109/THMS.2021.3092684.
[59] E. Schneiders, E. Cheon, J. Kjeldskov, M. Rehm, M.B. Skov, Non-dyadic interaction: A literature review of 15 years of human-robot interaction conference publications, ACM Trans. Hum.-Robot Interact. 11 (2) (2022) 1–32.
[60] M. Story, P. Webb, S.R. Fletcher, G. Tang, C. Jaksic, J. Carberry, Do speed and proximity affect human-robot collaboration with an industrial robot arm? Int. J. Soc. Robot. (2022) 1–16.
[61] Y. Liu, Z. Li, H. Liu, Z. Kan, Skill transfer learning for autonomous robots and human–robot cooperation: A survey, Robot. Auton. Syst. 128 (2020) 103515, http://dx.doi.org/10.1016/j.robot.2020.103515, URL https://www.sciencedirect.com/science/article/pii/S0921889019309972.
[62] Q. Liu, Z. Liu, W. Xu, Q. Tang, Z. Zhou, D.T. Pham, Human-robot collaboration in disassembly for sustainable manufacturing, Int. J. Prod. Res. 57 (12) (2019) 4027–4044.
[63] Q. Lv, R. Zhang, T. Liu, P. Zheng, Y. Jiang, J. Li, J. Bao, L. Xiao, A strategy transfer approach for intelligent human-robot collaborative assembly, Comput. Ind. Eng. 168 (2022) 108047, http://dx.doi.org/10.1016/j.cie.2022.108047, URL https://www.sciencedirect.com/science/article/pii/S0360835222001176.
[64] M.S. Yasar, T. Iqbal, A scalable approach to predict multi-agent motion for human-robot collaboration, IEEE Robot. Autom. Lett. 6 (2) (2021) 1686–1693.
[65] G. Michalos, N. Kousi, P. Karagiannis, C. Gkournelos, K. Dimoulas, S. Koukas, K. Mparis, A. Papavasileiou, S. Makris, Seamless human robot collaborative assembly – An automotive case study, Mechatronics 55 (2018) 194–211, http://dx.doi.org/10.1016/J.MECHATRONICS.2018.08.006.
[66] G. Michalos, P. Karagiannis, N. Dimitropoulos, D. Andronas, S. Makris, Human robot collaboration in industrial environments, Intell. Syst. Control Autom. Sci. Eng. 81 (2022) 17–39, http://dx.doi.org/10.1007/978-3-030-78513-0_2.
[67] N. Dimitropoulos, T. Togias, N. Zacharaki, G. Michalos, S. Makris, Seamless human–robot collaborative assembly using artificial intelligence and wearable devices, Appl. Sci. 11 (12) (2021) 5699, http://dx.doi.org/10.3390/APP11125699.
[68] Q. Liu, Z. Liu, W. Xu, Q. Tang, Z. Zhou, D.T. Pham, Human-robot collaboration in disassembly for sustainable manufacturing, Int. J. Prod. Res. 57 (2019) 4027–4044, http://dx.doi.org/10.1080/00207543.2019.1578906.
[69] A. Bilberg, A.A. Malik, Digital twin driven human–robot collaborative assembly, CIRP Ann. 68 (1) (2019) 499–502, http://dx.doi.org/10.1016/J.CIRP.2019.04.011.
[70] K. Dröder, P. Bobka, T. Germann, F. Gabriel, F. Dietrich, A machine learning-enhanced digital twin approach for human-robot-collaboration, Procedia Cirp 76 (2018) 187–192.
[71] M. Matulis, C. Harvey, A robot arm digital twin utilising reinforcement learning, Comput. Graph. 95 (2021) 106–114.
[72] N. Kousi, C. Gkournelos, S. Aivaliotis, K. Lotsaris, A.C. Bavelos, P. Baris, G. Michalos, S. Makris, Digital twin for designing and reconfiguring human–robot collaborative assembly lines, Appl. Sci. 11 (10) (2021) 4620, http://dx.doi.org/10.3390/APP11104620.
[73] T. Asfour, M. Waechter, L. Kaul, S. Rader, P. Weiner, S. Ottenhaus, R. Grimm, Y. Zhou, M. Grotz, F. Paus, Armar-6: A high-performance humanoid for human-robot collaboration in real-world scenarios, IEEE Robot. Autom. Mag. 26 (4) (2019) 108–121.
[74] L. Gualtieri, E. Rauch, R. Vidoni, Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review, Robot. Comput.-Integr. Manuf. 67 (2021) 101998.
[75] P. Aivaliotis, S. Aivaliotis, C. Gkournelos, K. Kokkalis, G. Michalos, S. Makris, Power and force limiting on industrial robots for human-robot collaboration, Robot. Comput.-Integr. Manuf. 59 (2019) 346–360, http://dx.doi.org/10.1016/J.RCIM.2019.05.001.
[76] L. Wang, R. Gao, J. Váncza, J. Krüger, X.V. Wang, S. Makris, G. Chryssolouris, Symbiotic human-robot collaborative assembly, CIRP Ann. 68 (2) (2019) 701–726, http://dx.doi.org/10.1016/J.CIRP.2019.05.002.