A. Marjovi et al. / Robotics and Autonomous Systems 60 (2012) 1191–1204 1193
computationally hard problems. Computer clusters are typically independent computers connected via a single local area network (LAN). Robots can also establish a computer cluster through wireless networks to form a robotic cluster. We propose the following definition for a robotic cluster and then explain its characteristics, requirements and implementation in detail.

Definition 1. A robotic cluster is a group of individual robots that are able to share their processing resources within the group in order to quickly solve computationally hard problems.

In a robotic cluster, the processing resources can be shared by applying computer clustering approaches. In these systems, each node is still able to run its own tasks independently; moreover, the CPUs of the robots are shared in the cluster. Therefore, when a robot sends some processing jobs to the others, those robots simultaneously run their own tasks and also the shared requested job. Fig. 1 demonstrates the concept of a robotic cluster that consists of multiple heterogeneous robots connected through a wireless network.

Applications of robotic clusters are tasks that demand
• extreme processing resources, and
• relatively cheap designs.

Processing high amounts of data or mapping a large and complex environment with multiple simple robots are two examples of these applications. Processing and cost are both hard constraints that emphasize simplicity of the individual robots, and thus motivate a cluster approach to solve computationally hard problems. Extensive research is needed to find methodologies for designing and implementing robotic clusters that address several challenges in robotics research. This section goes into the details of this concept.

2.1. Characteristics of robotic clusters

Robotic clusters benefit from a number of advantages offered by the concepts of computer clusters, including:

I Processing power: Each robot in a cluster is potentially able to use the processors of the other robots. Most previous experiments with real robots show that robots usually perform simple tasks (e.g. data logging and navigation) that do not require high processing power, but in some situations they need to process a huge amount of data (e.g. when a robot needs to process sensory data, finds itself in an unexpected situation or needs to make a hard decision). Therefore, in a group of robots, most of the robots usually perform simple tasks and only a few need high processing resources. Using clustering concepts, the robots that need processing power are able to use the processing units of the other robots. This idea can be developed further, such that there can exist some special robots that do not participate directly in robotic missions but help the other robots by sharing their processing power.
II Reduced cost: The price of off-the-shelf simple computers and small laptops has decreased in recent years. These simple computers can be used as the processing unit of each individual robot, and as a cluster they provide high processing capacity. Generally, in comparison with single-robot systems, the parallel processing power of a robotic cluster is, in many cases, more cost efficient than a single robot with the same processing capability.
III Scalability: One of the greatest advantages of robotic clusters is the scalability that they offer. While complex single robots with high processing resources have a fixed processing capacity, robotic clusters are easily expandable by adding additional nodes to the system when conditions change.
IV Availability: When a single super-robot fails, the entire mission fails. However, if a robot in a robotic cluster fails, its operations can simply be transferred to another robot within the cluster, ensuring that there is no interruption in the service.

2.2. Required equipment for robotic clusters

For implementing a robotic cluster, there is no need for extra hardware other than a wireless network and some robots equipped with simple computers. Most recently developed robots are already equipped with small computers running operating systems capable of clustering. Only some software packages (named middleware) and applications should be installed on the robots to form a robotic cluster. This paper describes the details of these packages in Sections 2.4 and 3.

2.3. Programming

Programming for robotic clusters is different from other types of robotic programming. Programs should be written using parallel programming approaches. The programmer should consider the following questions:
• Which parts of the code should be run simultaneously on more than one robot, and which parts should be run on only one robot?
• Which message passing protocol is better for a specific application?
• Which variables should be local to a robot and which variables should be global to all robots?
• How and when should the robots communicate?
• How should the local results of individual robots be aggregated to find the global result?

The answers to these questions depend on the cluster's architecture and the robotic application. For example, programming on a cluster with shared memory for the whole system is very different from programming on a cluster that does not have any shared memory. Since in many robotic systems the robots are structurally independent, the programmer should not use any shared memory approach and should try to minimize the number of messages to be sent and received. Based on the application, the programmer should design an algorithm and address the mentioned questions. Section 4 provides a particular example and presents a parallel algorithm for a robotic application. Generally speaking, most parallel programs are written in the C language, using the message passing protocols provided by the clustering middleware for communication between the nodes. Section 2.4 presents details of these protocols.

2.4. Middleware

In a robotic cluster (similar to a computer cluster), the activities of the computing nodes (robots) are orchestrated by "clustering middleware", a software layer that allows the users or applications to treat the cluster as one cohesive computing unit, e.g. via a single system image (SSI) concept [48]. Middleware resides between the operating system and the user applications. It glues the hardware resources together by message passing, by moving processes across machines, and by monitoring and synchronizing the work of the nodes [49]. Resource management, advanced task scheduling, workload management, communications, tools for cluster management, and cluster building kits are the most important issues in clustering for which middleware standards provide solutions.

There are many different implemented standards for the middleware in computer clusters. Among these standards, three are well accepted by the community: Open Multi-Processing (OpenMP) [50], Parallel Virtual Machine (PVM) [51], and Message Passing Interface (MPI) [52]. OpenMP can only be run on shared memory computers, making it inappropriate for robotic clusters. In contrast, PVM is a distributed operating system that forms a framework for building a single high performance parallel virtual machine from a collection of interconnected computers.
2.7. Dynamic architecture

Unlike computer clusters, whose hardware structure is either fixed or changed by a human administrator, in robotic clusters the hardware structure of the system changes dynamically while the robots spatially move in the environment. The robots eventually (and sometimes intentionally) get into, or out of, network range, making the hardware structure of the cluster very dynamic. This is one of the most significant differences between a computer cluster and a robotic cluster.

A typical architecture of a computer cluster is shown in Fig. 3. The key components of a cluster include multiple standalone computers, operating systems, a high performance interconnection, communication software, middleware, and applications. Fig. 4 illustrates the architecture of a robotic cluster. In this architecture,

Fig. 5 presents a state diagram that individual robots in a robotic cluster follow in their decision making. In this diagram, the robots which do not need, and therefore have, extra processing resources broadcast an "I am available" message to the neighboring robots. On the other hand, the robots which lack processing resources measure the currently available processing resources in the cluster and the overhead of using the network, and check whether it is worth splitting the task or not. We define the following condition as the meaning of being "worth" it:

Tp + Toverhead < Tseq    (1)

where Tp is the estimated time that it takes for the nodes of the cluster to finish processing their shares of the task. Tp is mostly determined by the time that the computationally slowest robot of the cluster takes to process its subtask. Tseq is the estimated time that it takes to process the whole task sequentially on a single robot.
Table 1
List of similar edges and processed vertexes in Algorithms 1 and 2 for the maps shown in Fig. 9 (left). Mismatches are colored red in this table.

Algorithm 3: Merge_parallel(M1, M2, MAP), MPI solution
 1 inputs:
 2   M1 = [V, E, k1]
 3   M2 = [W, F, k2]
 4 Output: A map that is the result of merging M1 and M2
 5 begin
 6   MPI_Init(...)
 7   MPI_Barrier(MPI_COMM_WORLD)
 8   begin
 9     MPI_Comm_rank(MPI_COMM_WORLD, &id)
10     MPI_Comm_size(MPI_COMM_WORLD, &p)
11     Low_value = (id − 1) × k1/p + 1
12     High_value = id × k1/p
13     LIST = Null
14     for i = Low_value; i ≤ High_value; i++ do
15       for j = 1; j ≤ k2; j++ do
16         if E_equiv(e_i, f_j) then
17           (I1_M1, I2_M1) = Index_Adj(e_i)

Fig. 10. Robots mapping a real environment [3].

Low_value_i = (id_i − 1) × k1/p + 1    (3)
High_value_i = id_i × k1/p             (4)
where id_i denotes the ID of robot i in the cluster, p is the total number of nodes, and k1 is the size of map M1. To explain these formulas, we provide an example: imagine the map size k1 is 1000 and the current number of available robots p is 5. Based on (3) and (4), for the robot whose id is 1, Low_value_1 and High_value_1 are respectively 1 and 200, and for the robot with id = 2, Low_value_2 is 201 and High_value_2 is 400. Thus, robot 1 works on the interval [1, 200], robot 2 on the interval [201, 400], robot 3 on [401, 600], and so on. If the current number of robots were 10, the share of each robot would be an interval of length 100.

Fig. 11. The topological map with 54 nodes of the environment shown partially in Fig. 10.
Using the function MPI_Comm_rank() in the algorithm, the robots find their own ID, and by calling the function MPI_Comm_size() they become aware of the total number of robots (p) participating in the cluster at the moment. It is worth mentioning that in MPI programming the variables are private, i.e. changes to a variable on one node (inside the parallel part of the code) do not have any effect on the value of the same variable on another node.

By considering k1 (and not k2) in (3) and (4), map M1 is divided between the robots in equal shares. Now each node runs the previously presented sequential algorithm, but starting from its given share of data in M1, and finds the biggest common subgraphs between M1 and M2 (see Algorithm 3).

Fig. 12. Three robots mapping an environment with 36 nodes (left). Three robots mapping an environment with 81 nodes (right).
4.2.2. Functionality

Similar to Algorithm 1, algorithm Merge_parallel compares all the edges in a block of M1 and M2 one by one. Once an edge e_i in M1 is found equivalent to an edge f_j in M2, a transformation function F is calculated that maps e_i onto f_j. Then Algorithm 2, Com_subgraph, finds the largest common subgraph between the two maps, starting from the edges e_i and f_j and checking the compatibility of the vertexes based on the transformation function F. Each robot saves the common subgraphs it finds into its LIST. Finally, the LISTs will contain all common subgraphs that can be generated between M1 and M2 with different transformation functions. These LISTs are generated by different robots and need to be aggregated to one node and processed.

4.2.3. Aggregating the results

MPI_Bcast is called in the algorithm, so that each robot will broadcast its results (LIST) to the cluster. In this algorithm, the robot with id = Sid is responsible for aggregating the distributed results and calculating the final result. Sid denotes the id of the robot which initially requested the map merging process. This robot runs MPI_Reduce to gather the results of the processed data from the other computers into its own LIST. Based on the biggest found common subgraph (the hypothesis), algorithm Merge_parallel maps M1 with the transformation function associated with that hypothesis and generates M1mapped. Finally, the output map (MAP) is generated by overlapping M1mapped and M2.

4.3. Experimental results

The algorithms have been run on the implemented robotic cluster (Fig. 7, explained in Section 3) using up to eight modified Roomba robots. For evaluation of the method, all algorithms were tested ten times with different sets of maps. Several topological maps were experimentally generated by real robots using the results of our past experiments in [3,32,8,72]. Fig. 10 shows two robots exploring and mapping a real environment. This environment is the corridors of the second floor of the Institute of Systems and Robotics in Coimbra. Fig. 11 shows the topological map of this environment generated by the robots. Moreover, a set of exploration and mapping experiments was done in the Player/Stage framework. Figs. 12 and 13 show three simulated environments being mapped by multiple robots.

Figs. 14 and 15 demonstrate an example of the map merging experiments. In Fig. 14, two partial maps of one environment are shown that were generated by two robots without having common reference frames. About 10% of the vertexes and edges of these partial maps are equivalent. Fig. 15 presents the result of merging these maps. The execution time for each merging experiment was measured five times and the mean values are shown in Fig. 16. These results show that if the maps are large enough (map size bigger than 54 in this case), the execution time decreases as the number of robots increases.

Parallel algorithms are usually evaluated by analyzing their speedup and their efficiency over the serial algorithms. Reduction
Fig. 13. Four robots mapping an environment with 136 nodes.

Fig. 16. Execution time results for different sizes of real world and simulation maps based on the number of robots.

Fig. 17. Speedup for different sizes of problem in the real world and simulations.

Fig. 19. Merged map with about 1000 vertexes and 900 edges.

Fig. 20. Execution time results for different sizes of artificial maps based on the number of processors.
experiments and are reported in Figs. 20 and 21. These results show that, for large enough maps, the speedup increases with the number of robots in the cluster.

Another parameter used to evaluate the performance of parallel algorithms is efficiency, which measures the utilization rate of the processors in the execution of a parallel program. It is equal to the ratio of the speedup and the number of processors used. We have measured this parameter for the proposed parallel map merging algorithm. Fig. 22 demonstrates the efficiency of the proposed parallel algorithm on the real world and simulation topological maps, while Fig. 23 presents the efficiency of the algorithm on the artificially generated topological maps. Both graphs show similar results, i.e., the efficiency is mostly between 0.6 and 0.8, especially on large graphs, where it converges to 0.72. This is a significant result that proves the efficiency of the method in reality, since it was obtained by implementing the algorithm on a real robotic cluster.

Fig. 22. The efficiency of the parallel algorithm on different sizes of real and simulated topological maps.

Considering both the sequential and parallel solutions in Algorithms 1 and 2, the following question arises: "when should a robot use each of these algorithms?". Considering the state diagram of Fig. 5, a robot should use the cluster only if it is "worth" using it. The experimental results (in Figs. 17, 21, 22 and 23) showed that if the size of the maps is too small, the parallel solution will take more time than the sequential solution. Based on this information, we present Algorithm 4, which checks the size of the input maps and calls the sequential algorithm if the maps are smaller than a threshold M, and the parallel algorithm if the maps are large enough. For this particular application and the specified hardware and software, based on the speedup results of Figs. 17 and 21, we found that M should be set to 50. For other applications, the system designer should estimate or measure the system's performance in both the sequential and parallel cases and design a method similar to Algorithm 4 that uses the robotic cluster efficiently.

The presented results showed that the proposed method can speed up the execution time of a map merging algorithm. We believe that this approach can be used in many other multi-robot applications that demand high processing resources.
Acknowledgments
[26] M. Wang, H. Wang, D. Vogel, K. Kumar, D. Chiu, Agent-based negotiation and decision making for dynamic supply chain formation, Engineering Applications of Artificial Intelligence 22 (7) (2009) 1046–1055.
[27] A. Chopra, M. Singh, An architecture for multiagent systems: an approach based on commitments, in: Int. Conf. on Autonomous Agents and Multiagent Systems, Workshop on Programming Multiagent Systems, Budapest, Hungary, 2009.
[28] B. An, V. Lesser, K. Sim, Strategic agents for multi-resource negotiation, Autonomous Agents and Multi-Agent Systems 23 (1) (2011) 114–153.
[29] M. Dias, R. Zlot, N. Kalra, A. Stentz, Market-based multirobot coordination: a survey and analysis, Proceedings of the IEEE 94 (7) (2006) 1257–1270.
[30] C. Clark, T. Bretl, S. Rock, Applying kinodynamic randomized motion planning with a dynamic priority system to multi-robot space systems, in: IEEE Aerospace Conf. Proceedings, vol. 7, 2002, pp. 7–3621.
[31] E. Ferranti, N. Trigoni, M. Levene, Brick & Mortar: an on-line multi-agent exploration algorithm, in: Proc. IEEE Int. Conf. on Robotics and Automation, 2007, pp. 761–767.
[32] A. Marjovi, J.G. Nunes, L. Marques, A.T. de Almeida, Multi-robot exploration and fire searching, in: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, St. Louis, MO, USA, 2009.
[33] Y.U. Cao, A.S. Fukunaga, A.B. Kahng, Cooperative mobile robotics: antecedents and directions, Autonomous Robots 4 (1997) 7–27.
[34] G. Dudek, M.R.M. Jenkin, E. Milios, D. Wilkes, A taxonomy for multi-agent robotics, Autonomous Robots 3 (4) (1996) 375–397.
[35] D. Borrmann, J. Elseberg, K. Lingemann, A. Nüchter, J. Hertzberg, Globally consistent 3D mapping with scan matching, Robotics and Autonomous Systems 56 (2) (2008) 130–142.
[36] J. Faigl, L. Přeučil, Self-organizing map for the multi-goal path planning with polygonal goals, Artificial Neural Networks and Machine Learning (2011) 85–92.
[37] H. Tamimi, H. Andreasson, A. Treptow, T. Duckett, A. Zell, Localization of mobile robots with omnidirectional vision using particle filter and iterative SIFT, Robotics and Autonomous Systems 54 (9) (2006) 758–765.
[38] S. Carpin, E. Pagello, An experimental study of distributed robot coordination, Robotics and Autonomous Systems 57 (2) (2009) 129–133.
[39] F. Beutler, M. Huber, U. Hanebeck, Probabilistic instantaneous model-based signal processing applied to localization and tracking, Robotics and Autonomous Systems 57 (3) (2009) 249–258.
[40] D. Bhadauria, O. Tekdas, V. Isler, Robotic data mules for collecting data over sparse sensor fields, Journal of Field Robotics 28 (3) (2011) 388–404.
[41] G. Pfister, In Search of Clusters, Prentice-Hall, Englewood Cliffs, NJ, 1998.
[42] T. Anderson, D. Culler, A case for network of workstations (NOW), IEEE Micro 15 (1) (1994) 54–64.
[43] H. Casanova, J. Dongarra, NetSolve: a network-enabled server for solving computational science problems, International Journal of High Performance Computing Applications 11 (3) (1997) 212.
[44] K. Kurihara, S. Hoshino, K. Yamane, Y. Nakamura, Optical motion capture system with pan-tilt camera tracking and real time data processing, in: Proc. IEEE Int. Conf. on Robotics and Automation, 2002.
[45] F. Tagliareni, M. Nierlich, O. Steinmetz, T. Velten, J. Brufau, J. Lopez-Sanchez, M. Puig-Vidal, J. Samitier, Manipulating biological cells with a micro-robot cluster, in: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2005.
[46] E. Yoshida, S. Murata, A. Kamimura, K. Tomita, H. Kurokawa, S. Kokaji, A motion planning method for a self-reconfigurable modular robot, in: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2002.
[47] A. Şamiloğlu, V. Gazi, A. Koku, Effects of asynchronism and neighborhood size on clustering in self-propelled particle systems, Journal of Computing and Information Science (2006) 665–676.
[48] R. Buyya, High Performance Cluster Computing: Architectures and Systems, vol. 1, Prentice Hall, Upper Saddle River, NJ, USA, 1999.
[49] S. Georgiev, Evaluation of cluster middleware in a heterogeneous computing environment, Master's Thesis, Internationaler Universitätslehrgang, 2009.
[50] L. Dagum, R. Menon, OpenMP: an industry standard API for shared-memory programming, IEEE Computational Science and Engineering 5 (1) (1998) 46–55.
[51] A. Geist, PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing, The MIT Press, 1994.
[52] N. Desai, R. Bradshaw, A. Lusk, E. Lusk, MPI cluster system software, in: Recent Advances in Parallel Virtual Machine and Message Passing Interface, Springer, 2004, pp. 277–286.
[53] J. Corbalan, A. Duran, J. Labarta, Dynamic load balancing of MPI+OpenMP applications, in: IEEE Int. Conf. on Parallel Processing, 2004, pp. 195–202.
[54] Q. Snell, G. Judd, M. Clement, Load balancing in a heterogeneous supercomputing environment, in: Int. Conf. on Parallel and Distributed Processing Techniques and Applications, Las Vegas, NV, 1998, pp. 951–957.
[55] M. Bhandarkar, L. Kale, E. de Sturler, J. Hoeflinger, Adaptive load balancing for MPI programs, in: Computational Science-ICCS, 2001, pp. 108–117.
[56] G. Utrera, J. Corbalán, J. Labarta, Dynamic load balancing in MPI jobs, in: High-Performance Computing, Springer, 2008, pp. 117–129.
[57] K. Erciyes, R. Payli, A cluster-based dynamic load balancing middleware protocol for grids, in: Advances in Grid Computing-EGC, 2005, pp. 805–812.
[58] Y. Zhu, J. Guo, Y. Wang, Study on dynamic load balancing algorithm based on MPICH, in: IEEE WRI World Congress on Software Engineering, vol. 1, 2009, pp. 103–107.
[59] I. Stoica, F. Sultan, D. Keyes, Modeling communication in cluster computing, in: Proc. of the 7th SIAM Conf. on Parallel Processing for Scientific Computing, San Francisco, CA, 1995, pp. 820–825.
[60] R. Martin, A. Vahdat, D. Culler, T. Anderson, Effects of communication latency, overhead, and bandwidth in a cluster architecture, in: IEEE Int. Symp. on Computer Architecture, 1997, pp. 85–97.
[61] J. Hromkovič, Communication Complexity and Parallel Computing, Springer-Verlag New York Inc., 1997.
[62] M.J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill Press, 2003.
[63] R. Grabowski, L. Navarro-Serment, C. Paredis, P. Khosla, Heterogeneous teams of modular robots for mapping and exploration, Autonomous Robots 8 (3) (2000) 293–308.
[64] S. Thrun, A probabilistic on-line mapping algorithm for teams of mobile robots, The International Journal of Robotics Research 20 (5) (2001) 335–363.
[65] J. Ko, B. Stewart, D. Fox, K. Konolige, B. Limketkai, A practical, decision-theoretic approach to multi-robot mapping and exploration, in: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, vol. 4, 2003, pp. 3232–3238.
[66] G. Dedeoglu, G. Sukhatme, Landmark-based matching algorithm for cooperative mapping by autonomous robots, in: Int. Symp. on Distributed Autonomous Robotics Systems, 2000.
[67] J. Fenwick, P. Newman, J. Leonard, Cooperative concurrent mapping and localization, in: Proc. IEEE Int. Conf. on Robotics and Automation, vol. 2, 2002, pp. 1810–1817.
[68] H. Wang, M. Jenkin, P. Dymond, Enhancing exploration in graph-like worlds, in: IEEE Canadian Conf. on Computer and Robot Vision, 2008, pp. 53–60.
[69] J. Jennings, C. Kirkwood-Watts, C. Tanis, Distributed map-making and navigation in dynamic environments, in: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, vol. 3, 1998, pp. 1695–1701.
[70] G. Dudek, M. Jenkin, E. Milios, D. Wilkes, Topological exploration with multiple robots, in: Int. Symp. on Robotics and Applications, Anchorage, AK, USA, 1998.
[71] K. Konolige, D. Fox, B. Limketkai, J. Ko, B. Stewart, Map merging for distributed robot navigation, in: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, vol. 1, 2003, pp. 212–217.
[72] A. Marjovi, J. Nunes, L. Marques, A. de Almeida, Multi-robot fire searching in unknown environment, in: Field and Service Robotics, in: Springer Tracts in Advanced Robotics, vol. 62, 2010, pp. 341–351.
[73] C. Kruskal, L. Rudolph, M. Snir, A complexity theory of efficient parallel algorithms, Theoretical Computer Science 71 (1) (1990) 95–132.
[74] G. Amdahl, Validity of the single processor approach to achieving large scale computing capabilities, in: Spring Joint Computer Conf., ACM, 1967, pp. 483–485.
[75] F. Alecu, Performance analysis of parallel algorithms, Journal of Applied Quantitative Methods (2007) 129.

Ali Marjovi received his B.Eng. degree in Computer Engineering from the University of Isfahan, Iran, in 2003 and his M.Sc. degree in Computer Hardware Engineering from Sharif University of Technology, Tehran, Iran, in 2005. He is currently a Ph.D. candidate at the Institute of Systems and Robotics, University of Coimbra, Portugal. His research interests include multi-robot search and exploration, swarm robotics, and olfaction.

Sarvenaz Choobdar is a Ph.D. candidate in the MAP-I Doctoral Program in Computer Science and a researcher at INESC Porto. She is pursuing her Ph.D. in the field of graph mining under the supervision of Prof. Fernando Silva. She received her Master's degree in Information Technology from Tarbiat Modares University, Iran, in 2008 and her Engineering degree from Alzahra University, Iran, in 2006. Her research area lies in the field of parallel computing, graph mining, stream mining and data mining for relational data.

Lino Marques has a Ph.D. degree in Electrical Engineering from the University of Coimbra, Portugal. He is currently a Senior Lecturer at the Department of Electrical and Computer Engineering, University of Coimbra, and he heads the Embedded Systems Laboratory of the Institute for Systems and Robotics at that university (ISR-UC). His main research interests include embedded systems, mechatronics, and robotics for risky environments.