move, do a little sensing, and interact, teach and entertain humans, to a surprising degree. Personal robots need to interact effectively with people, and they need to fit comfortably into people's lifestyles.

Personal robots have great potential for connecting people with interesting, rich and interactive remote spaces. Tele-visits to museums, galleries, or exotic places can both entertain and provide a unique educational experience. The success of the Jason project, the tele-studies of the Titanic, and the daily pictures from Sojourner's Mars visit hint at these possibilities. But controlling personal tele-robots from a remote computer can be difficult because of environmental clutter, limited sensing, and network delays. In "Internet Control Architecture for Internet-Based Personal Robot", Han, Kim, Kim, and Kim (KAIST, Taejon, Korea) describe a control architecture that is resilient to network delays for tele-operation of personal robots over the Internet. They use a local model of the robot to plan collision-avoiding motions, and then update the remote robot's goal positions. In "Insect Telepresence", All and Nourbakhsh (Carnegie Mellon University, Pittsburgh, Pennsylvania) describe a system that allows museum visitors to explore the inside of an insect's enclosure and interact "face-to-face" with the insects using a tiny tele-operated camera. They employed user-centered design techniques and formal HCI principles in the design of their interface.

Apart from pure user control, personal robots can be outfitted with varying amounts of autonomy. This allows them to work synergistically with a person. In "Enhancing Randomized Motion Planners: Exploring with Haptic Hints," Bayazit, Song, and Amato (Texas A&M University, College Station, Texas) explore cooperative solution of motion planning problems by human and computer. Based on the probabilistic roadmap framework, their planner allows human intervention in cases where the system has failed to recognize certain poses of the robot that could improve the success of the plan.
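The paper gives the details of that planner; purely as a rough illustration of the general idea (user-supplied configurations folded into a probabilistic roadmap), the short Python sketch below builds a toy roadmap for a 2D point robot among circular obstacles. The obstacle map, workspace bounds, hint coordinates, and all function names are invented for this example and are not taken from the paper.

# Toy PRM sketch (hypothetical example; not the authors' implementation).
import math
import random

# Assumed toy map: two circular obstacles leaving a narrow gap near (5, 5.5).
OBSTACLES = [((5.0, 3.0), 2.4), ((5.0, 8.0), 2.4)]   # (center, radius)
BOUNDS = (0.0, 10.0)                                  # square workspace
CONNECT_RADIUS = 2.0                                  # maximum roadmap edge length

def collision_free(p):
    # A configuration is valid if it lies outside every obstacle.
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def edge_free(p, q, steps=20):
    # Validate an edge by sampling intermediate configurations along it.
    return all(
        collision_free(((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1]))
        for t in (i / steps for i in range(steps + 1))
    )

def build_roadmap(n_samples, hints=()):
    # Human-supplied hint configurations become ordinary roadmap nodes,
    # alongside the uniformly random samples.
    nodes = list(hints)
    while len(nodes) < len(hints) + n_samples:
        p = (random.uniform(*BOUNDS), random.uniform(*BOUNDS))
        if collision_free(p):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            if math.dist(p, nodes[j]) <= CONNECT_RADIUS and edge_free(p, nodes[j]):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def connected(edges, start_idx, goal_idx):
    # Simple graph search over the roadmap.
    frontier, seen = [start_idx], {start_idx}
    while frontier:
        i = frontier.pop()
        if i == goal_idx:
            return True
        for j in edges[i]:
            if j not in seen:
                seen.add(j)
                frontier.append(j)
    return False

if __name__ == "__main__":
    start, goal = (1.0, 1.0), (9.0, 9.0)
    # A hint placed in the narrow gap can bridge regions that uniform
    # sampling connects only rarely.
    hints = [start, goal, (5.0, 5.5)]
    nodes, edges = build_roadmap(150, hints=hints)
    print("start and goal connected:", connected(edges, 0, 1))

The actual planner gathers its hints through a haptic device and uses them to guide the roadmap toward configurations the sampler has missed; the sketch above only shows the simplest way such configurations could enter a roadmap.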
The study of human motion and cognition can drive the design of personal robots. In "Moving Personal Robots in Real-Time Using Primitive Motions", Xu and Zheng (Ohio State University, Columbus, Ohio) draw on studies of human motion to build a taxonomy of motion primitives for robots. They build a reflexive control scheme that composes these primitives into more complex motions. Such a system can be naturally controlled with a few commands from a human. In "Psychological Effects of Behavior Patterns of a Mobile Personal Robot", Butler and Agah (The University of Kansas, Lawrence, Kansas) address the human co-existence question. They study a group of 40 subjects and their reactions to a robot that is passing near them, avoiding them, or performing a task near them. Their study is aimed at a better understanding of human-robot interaction and at how robot behavior can best be designed to inspire confidence and comfort in the people around the robot.

Personal robots can help users with sensory-motor injuries or disabilities. In "A Stewart Platform-Based System for Ankle Telerehabilitation", Girone, Burdea, Bouzit, Popescu, and Deutsch (Rutgers University, Piscataway, New Jersey) describe the "Rutgers Ankle", a haptic interface designed for orthopedic rehabilitation. This device can be connected to the net, and allows patients to exercise at home while their progress is monitored remotely. In "Multiobjective Navigation of a Guide Mobile Robot for the Visually Impaired Based on Intention Inference of Obstacles", Kang, Kim, Lee, and Bien (KAIST, Taejon, Korea) describe a system for avoiding moving obstacles (such as other pedestrians) for a visually impaired person. Their system tracks the pedestrians' positions and infers their intended goals using a fuzzy reasoning system. Those goals are used to predict their future positions and advise the user on how to maintain a safe distance from others.

In "The CAM-Brain Machine (CBM): an FPGA Based Tool for Evolving a 75 Million Neuron Artificial Brain to Control a Lifesized Kitten Robot", de Garis, Korkin, and Fehr (STARLAB, Brussels, Belgium) describe the architecture of a very large, real-time neural network. Their design uses ordinary RAM to store the pattern of interconnections and a custom FPGA circuit to perform many neuron updates per second. The system uses a genetic algorithm to update the network; a schematic sketch of this evolutionary loop appears at the end of this section. Their goal is true artificial brains, and the first application is to a life-size kitten robot called "Robokitty". In "Supervised Autonomy: A Framework for Human-Robot Systems Development", Cheng and Zelinsky (The Australian National University, Canberra, Australia) describe a supervisory control system that relies on the robot to perform basic functions of perception and action. The human provides qualitative instructions, and receives feedback through a graphical user interface. The system has been designed in a human-centered way, to help users accomplish their tasks. The communication of task information between robot and human is the subject of "Information Sharing via Projection Function for Coexistence of Robot and Human" by Wakita, Hirai, Hori, and Fujiwara (Electrotechnical Laboratory, Tsukuba, Japan). They propose "projection functions" as one approach to information sharing, and describe the design of such a projection system and its interface to a user.
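The "genetic algorithm updates the network" step mentioned above can be sketched in a purely illustrative way, as follows. The sketch evolves the weights of a tiny fixed-topology network on a toy task (XOR); the network size, fitness function, and GA parameters are invented for the example and have nothing to do with the CBM's 75-million-neuron, FPGA-based design.

# Toy example only: evolving the weights of a small 2-2-1 network with a
# genetic algorithm.  Not related to the CAM-Brain Machine's architecture.
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9   # two hidden units (3 weights each) + one output unit (3 weights)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    # Fixed 2-2-1 topology; the genome w holds all weights and biases.
    h0 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    # Negative squared error over the XOR table (higher is better).
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=60, generations=300, sigma=0.3, mutation_rate=0.2):
    pop = [[random.uniform(-4, 4) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 4]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_WEIGHTS)       # one-point crossover
            child = [g + random.gauss(0.0, sigma) if random.random() < mutation_rate
                     else g
                     for g in a[:cut] + b[cut:]]       # per-gene Gaussian mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    for x, y in XOR:
        print(x, "->", round(forward(best, x), 3), "(target", y, ")")

Running the script typically drives the outputs close to the XOR targets within a few hundred generations, though a small GA like this is not guaranteed to converge on every run.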