Autonomous Robots 10, 131–134, 2001
© 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.
Guest Editorial: Personal Robotics
Welcome to the special issue of the Autonomous Robots journal on the emerging area of Personal Robotics. What is "personal robotics"? The concept is evolving, but the picture emerging is more than the "home robot" that does dishes and mows the lawn. A personal robot may or may not be highly autonomous and intelligent (e.g., it may be a tele-robot with little or no autonomy). It may or may not be highly dexterous or anthropomorphic. A "personal" robot serves some function for a human being, and adapts to their actions and needs. The main criterion is that it fulfills its role well. Because that role involves interaction with a person, new concepts from human-computer interaction (HCI) and user-centered design are important for the design of personal robots.

The papers in this collection span very intelligent robots that can interact with people at a high level, tele-robots that augment or extend human senses, and medical robots that assist injured or physically-challenged people. A robotics researcher will find some familiar concepts in these papers, like motion planning and supervisory control, but with a new emphasis on environments that are full of people going about their business. The roboticist will also find some new methodologies, like user-centered design and psychological studies. These techniques will become more and more important as robots find their way into new human contexts.

Our involvement with personal robots began with discussions between one of the editors and Howard Moraff at NSF in early 1996. There was already considerable interest in personal robotics on both sides of the Atlantic, and Howard's efforts inspired a US-French workshop on personal robotics at LAAS-CNRS in Toulouse in January 1998. There was a great deal of excitement at the workshop about personal robotics, but less than complete agreement about what the field should be.
By the end of that workshop, some clear themes had emerged, and the workshop team had converged on a set of criteria and key problems for personal robotics. Some key application areas are:

(i) Telepresence: providing rich remote experiences through robotics.
(ii) Assistive medical: replacing or augmenting human sensing and action.
(iii) Companion/pet robots: where the main goal is emotional satisfaction.
(iv) Education: robots as teachers or learning partners.
(v) Domestic: robots to help with domestic chores like cleaning.
(vi) Journeyman robots: which partner with a human to perform tasks.
(vii) Interface robots: which provide haptic or tactile sensation.

The LAAS workshop was followed by a related workshop at the IEEE International Conference on Robotics and Automation (ICRA) in 1999. That workshop covered several of the application areas, and was one of the most successful at the conference. The following year, it was felt that educational robotics deserved a special emphasis. The result was a workshop on "Personal Robots for Education" at ICRA in 2000. The present volume of papers was solicited in the summer of 1999, after the first ICRA workshop. These papers represent a cross-section of techniques and applications, and point the way to an energetic future for this area.

Personal robots are a radical idea for many people. The familiar prototypes for robots are the mechanical men of 1950s science fiction films, R2-D2 and C-3PO from Star Wars, and Sojourner, everyone's favorite plucky Mars probe. Allowing one of these machines to roam one's house, or the thought of being entertained by one of them, is below most people's threshold of credibility. And yet simple toys like Furby and MS Barney that are commercial successes are true personal robots. These toys do not look like the prototypes of robots. They are soft, non-mechanical, and are not "useful" in the sense of performing menial tasks that we expect to be the staple of home robots. Instead they
 
move, do a little sensing, and interact, teach, and entertain humans, to a surprising degree. Personal robots need to interact effectively with people, and they need to fit comfortably into people's lifestyles.

Personal robots have great potential for connecting people with interesting, rich, and interactive remote spaces. Tele-visits to museums, galleries, or exotic places can both entertain and provide a unique educational experience. The success of the Jason project, the tele-studies of the Titanic, and the daily pictures from Sojourner's Mars visit hint at these possibilities. But controlling personal tele-robots from a remote computer can be difficult because of environment clutter, limited sensing, and network delays. In "Internet Control Architecture for Internet-Based Personal Robot", Han, Kim, Kim, and Kim (KAIST, Taejon, Korea) describe a control architecture for tele-operation of personal robots over the Internet that is resilient to network delays. They use a local model of the robot to plan collision-avoiding motions, and then update the remote robot's goal positions. In "Insect Telepresence", All and Nourbakhsh (Carnegie Mellon University, Pittsburgh, Pennsylvania) describe a system that allows museum visitors to explore the inside of an insect's enclosure, and interact "face-to-face" with the insects using a tiny tele-operated camera. They employed user-centered design techniques and formal HCI principles in the design of their interface.

Apart from pure user control, personal robots can be outfitted with varying amounts of autonomy. This allows them to work synergistically with a person. In "Enhancing Randomized Motion Planners: Exploring with Haptic Hints," Bayazit, Song, and Amato (Texas A&M University, College Station, Texas) explore cooperative solution of motion planning problems by human and computer.
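The delay-tolerant teleoperation idea described above — plan motions against a local model of the robot, and ship only goal positions across the network — can be sketched in miniature. This is a hypothetical 1-D toy, not the KAIST architecture; the obstacle position, step sizes, and two-tick delay are all invented for illustration:

```python
from collections import deque

class LocalModel:
    """Operator-side model of the robot: plans collision-avoiding goals
    before anything is sent over the (possibly slow) network."""

    def __init__(self):
        self.position = 0.0

    def plan_goal(self, command, obstacles):
        """Turn a 1-D move command into a goal that stops short of obstacles."""
        goal = self.position + command
        for obs in obstacles:
            if self.position < obs <= goal:    # obstacle ahead of us
                goal = obs - 0.5
            elif goal <= obs < self.position:  # obstacle behind us
                goal = obs + 0.5
        self.position = goal
        return goal

class RemoteRobot:
    """Robot side: simply tracks the most recently received goal,
    so stale packets cost responsiveness, not safety."""

    def __init__(self):
        self.position = 0.0
        self.goal = 0.0

    def step(self, max_step=1.0):
        delta = self.goal - self.position
        self.position += max(-max_step, min(max_step, delta))

local = LocalModel()
robot = RemoteRobot()
link = deque([0.0, 0.0])         # FIFO standing in for a two-tick network delay

for command in [2.0, 2.0, 2.0]:  # operator pushes forward three times
    link.append(local.plan_goal(command, obstacles=[5.0]))
    robot.goal = link.popleft()  # robot receives a delayed goal
    robot.step()

print(round(local.position, 2))  # 4.5: the local plan stopped short of the obstacle
```

The key property is that the remote robot only ever tracks goals the local planner has already checked for collisions, so late packets degrade responsiveness rather than safety.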
Based on the probabilistic roadmap framework, their planner allows human intervention in cases where the system has failed to recognize certain poses of the robot that could improve the success of the plan.

The study of human motion and cognition can drive the design of personal robots. In "Moving Personal Robots in Real-Time Using Primitive Motions", Xu and Zheng (Ohio State University, Columbus, Ohio) draw on studies of human motion to build a taxonomy of motion primitives for robots. They build a reflexive control scheme that composes these primitives into more complex motions. Such a system can be naturally controlled with a few commands from a human. In "Psychological Effects of Behavior Patterns of a Mobile Personal Robot", Butler and Agah (The University of Kansas, Lawrence, Kansas) address the human co-existence question. They study a group of 40 subjects and their reactions to a robot that is passing near them, avoiding them, or performing a task near them. Their study is aimed at a better understanding of human-robot interaction, and of how robot behavior can best be designed so that it inspires confidence and comfort in the people around the robot.

Personal robots can help users with sensory-motor injuries or disabilities. In "A Stewart Platform-Based System for Ankle Telerehabilitation", Girone, Burdea, Bouzit, Popescu, and Deutsch (Rutgers University, Piscataway, New Jersey) describe the "Rutgers Ankle", a haptic interface designed for orthopedic rehabilitation. This device can be connected to the net, and allows patients to exercise at home while their progress is monitored remotely. In "Multiobjective Navigation of a Guide Mobile Robot for the Visually Impaired Based on Intention Inference of Obstacles", Kang, Kim, Lee, and Bien (KAIST, Taejon, Korea) describe a system for avoiding moving obstacles (such as other pedestrians) for a visually-impaired person. Their system tracks pedestrians' positions and infers their intended goals using a fuzzy reasoning system.
Those goals are used to predict the pedestrians' future positions and to advise the user on how to maintain a safe distance from others.

In "The CAM-Brain Machine (CBM): An FPGA-Based Tool for Evolving a 75 Million Neuron Artificial Brain to Control a Lifesized Kitten Robot", de Garis, Korkin, and Fehr (STARLAB, Brussels, Belgium) describe the architecture of a very large, real-time neural network. Their design uses ordinary RAM to store the pattern of interconnections and a custom FPGA circuit to perform many neuron updates per second. The system uses a genetic algorithm to update the network. Their goal is true artificial brains, and the first application is to a life-size kitten robot called "Robokitty". In "Supervised Autonomy: A Framework for Human-Robot Systems Development", Cheng and Zelinsky (The Australian National University, Canberra, Australia) describe a supervisory control system that relies on the robot to perform basic functions of perception and action. The human provides qualitative instructions, and receives feedback through a graphical user interface. The system has been designed in a human-centered way, to help users accomplish their tasks. The communication of task information between robot and human is the subject of "Information Sharing via Projection Function for Coexistence of Robot and Human" by Wakita, Hirai, Hori, and Fujiwara (Electrotechnical Laboratory, Tsukuba, Japan). They describe "projection functions" as one approach to information sharing, and present the design of such a projection system and its interface to a user.
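Several of the systems above pair automated planning with human guidance. As a hedged illustration of that theme (not the authors' implementation — the obstacle, sample count, and hint configurations below are invented), a minimal probabilistic-roadmap planner in the spirit of Bayazit, Song, and Amato's haptic hints can treat operator-supplied configurations as extra roadmap samples:

```python
import math
import random
from collections import defaultdict, deque

def collides(p, obstacle=((5.0, 5.0), 1.5)):
    """Point-in-disc check against one toy obstacle (center, radius)."""
    (cx, cy), r = obstacle
    return math.hypot(p[0] - cx, p[1] - cy) <= r

def edge_free(a, b, steps=20):
    """Approximate edge check: sample points along the straight segment."""
    return all(not collides((a[0] + (b[0] - a[0]) * t / steps,
                             a[1] + (b[1] - a[1]) * t / steps))
               for t in range(steps + 1))

def build_roadmap(nodes, k=6):
    """Connect each node to its k nearest neighbours with collision-free edges."""
    graph = defaultdict(list)
    for a in nodes:
        nearest = sorted((b for b in nodes if b != a),
                         key=lambda b: math.dist(a, b))[:k]
        for b in nearest:
            if b not in graph[a] and edge_free(a, b):
                graph[a].append(b)
                graph[b].append(a)
    return graph

def connected(graph, start, goal):
    """Breadth-first search over the roadmap."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

random.seed(1)
start, goal = (0.5, 0.5), (9.5, 9.5)
samples = [start, goal] + [(random.uniform(0, 10), random.uniform(0, 10))
                           for _ in range(40)]
samples = [s for s in samples if not collides(s)]

# The operator's "haptic hints": hand-picked configurations, treated
# exactly like random samples when the roadmap is built.
hints = [(5.0, 2.5), (7.0, 3.0)]

roadmap = build_roadmap(samples + hints)
print(connected(roadmap, start, goal))
```

Hint configurations placed in a region the random sampler covers poorly — a narrow passage, say — can connect components of the roadmap that random sampling alone would leave disconnected.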
 
It should be noted that, since limitations in space did not allow all ten papers to be included in the same issue, the first seven papers are included in this special issue, and the other three will be published in a subsequent issue.

The guest editors would like to thank the authors, the reviewers, and the staff at Kluwer Academic for all their contributions. We would also like to express our gratitude to the editor, Professor George A. Bekey, for supporting and encouraging this special issue. We hope that the collection of papers in this special issue can serve as a medium for further understanding of the emerging area of personal robotics.
John F. Canny
Computer Science Division, University of California, Berkeley, California
Email: jfc@cs.berkeley.edu
Arvin Agah
Department of Electrical Engineering and Computer Science, The University of Kansas, Lawrence, Kansas
Email: agah@ukans.edu
Arvin Agah
is Assistant Professor of Electrical Engineering and Computer Science at the University of Kansas. His research interests include human interactions with intelligent systems (robots, computers, and interfaces) and distributed autonomous systems (robots and agents). He has published over 60 articles in these areas. He has taught courses in artificial intelligence, robotics, software engineering, computer systems design laboratory, and intelligent agents. He has served as a technical program committee member, conference session chair, and organizing committee member for various international technical conferences. He is a member of ACM and a senior member of IEEE. Dr. Agah received his B.A. in Computer Science (Highest Honors) from the University of Texas at Austin in 1986; M.S. in Computer Science from Purdue University, West Lafayette, Indiana, in 1988; M.S. in Biomedical Engineering from the University of Southern California, Los Angeles, California, in 1993; and Ph.D. in Computer Science from the University of Southern California in 1994.

Dr. Agah has been a member of the research staff at Xerox Corporation's Webster Research Center, Rochester, New York; IBM Corporation's Los Angeles Scientific Center, Santa Monica, California; the Ministry of International Trade and Industry's Mechanical Engineering Laboratory, Tsukuba, Japan; and the Naval Research Laboratory's Navy Center for Applied Research in Artificial Intelligence, Washington, D.C. He has been an instructor at Mansfield Business School, Austin, Texas; Purdue University's Department of Computer Science, West Lafayette, Indiana; and the University of Tsukuba's Department of Engineering Systems, Tsukuba, Japan. He has also worked as a systems analyst and software engineer for entertainment law firms and management companies in Century City and Beverly Hills, California.
John Canny
is a professor in the Computer Science Division at the University of California at Berkeley. He came from MIT in 1987 after his thesis on robot motion planning, which won the ACM dissertation award. He received a Packard Foundation Fellowship and a PYI while at Berkeley. His main research interests are human-computer interaction through computer graphics and robotics. This includes work in gestural input and
