At the end of this lesson, you are expected to:
1. explain the human rights-based approach to science, technology and development;
2. identify the key documents and their principles that ensure the well-being of humans in the midst of scientific progress and technological development; and
3. discuss the importance of upholding human rights in science, technology and development.

INSTRUCTIONS: Read the article entitled "The Ethical Dilemmas of Robotics" from BBC News by D. Evans and answer the question that follows.

THE ETHICAL DILEMMAS OF ROBOTICS

If the idea of robot ethics sounds like something out of science fiction, think again, writes Dylan Evans. Scientists are already beginning to think seriously about the new ethical problems posed by current developments in robotics.

This week, experts in South Korea said they were drawing up an ethical code to prevent humans abusing robots, and vice versa. And a group of leading roboticists called the European Robotics Network (Euron) has even started lobbying governments for legislation.

At the top of their list of concerns is safety. Robots were once confined to specialist applications in industry and the military, where users received extensive training on their use, but they are increasingly being used by ordinary people. Robot vacuum cleaners and lawn mowers are already in many homes, and robotic toys are increasingly popular with children.

As these robots become more intelligent, it will become harder to decide who is responsible if they injure someone. Is the designer to blame, or the user, or the robot itself?

Decisions

Software robots - basically, just complicated computer programs - already make important financial decisions. Whose fault is it if they make a bad investment?

Isaac Asimov was already thinking about these problems back in the 1940s, when he developed his famous "three laws of robotics".
He argued that intelligent robots should all be programmed to obey the following three laws:

• A robot may not injure a human being or, through inaction, allow a human being to come to harm
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Law

These three laws might seem like a good way to keep robots from harming people. But to a roboticist they pose more problems than they solve. In fact, programming a real robot to follow the three laws would itself be very difficult.

For a start, the robot would need to be able to tell humans apart from similar-looking things such as chimpanzees, statues and humanoid robots. This may be easy for us humans, but it is a very hard problem for robots, as anyone working in machine vision will tell you.

Robot 'rights'

Similar problems arise with rule two, as the robot would have to be capable of telling an order apart from a casual request, which would involve more research in the field of natural language processing.

Asimov's three laws only address the problem of making robots safe, so even if we could find a way to program robots to follow them, other problems could arise if robots become sentient. If robots can feel pain, should they be granted certain rights? If robots develop emotions, as some experts think they will, should they be allowed to marry humans? Should they be allowed to own property?

These questions might sound far-fetched, but debates over animal rights would have seemed equally far-fetched to many people just a few decades ago. Now, however, such questions are part of mainstream public debate. And the technology is progressing so fast that it is probably wise to start addressing the issues now.

One area of robotics that raises some difficult ethical questions, and which is already developing rapidly, is the field of emotional robotics.
This is the attempt to endow robots with the ability to recognize human expressions of emotion, and to engage in behavior that humans readily perceive as emotional. Humanoid heads with expressive features have become alarmingly lifelike.

More pressing moral questions, however, are already being raised by the increasing use of robots in the military.
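As the article argues, stating Asimov's three laws is far easier than implementing them. Purely as an illustration of how the laws rank against one another, here is a minimal sketch in Python; every function and field name is hypothetical, and the genuinely hard part, computing predicates like "harms a human" at all, is simply assumed away.

```python
# Toy sketch of Asimov's three laws as an ordered rule check.
# All names are hypothetical; real robots cannot reliably evaluate
# predicates like "harms a human", which is the article's point.

def permitted(action: dict) -> bool:
    """Return True if a candidate action passes the three laws,
    checked in priority order (First outranks Second outranks Third)."""
    # First Law: may not injure a human, or by inaction allow harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders unless they conflict with the First
    # Law. Harmful ordered actions were already rejected above, so any
    # remaining ordered action is allowed even if it endangers the robot.
    if action.get("ordered_by_human"):
        return True
    # Third Law: absent an order, the robot must protect its own existence.
    if action.get("endangers_self"):
        return False
    return True

# An ordered action that harms a human is rejected (First overrides Second):
permitted({"harms_human": True, "ordered_by_human": True})   # False
# An ordered self-sacrifice is allowed (Second overrides Third):
permitted({"endangers_self": True, "ordered_by_human": True})  # True
```

Even this toy version shows why the laws "pose more problems than they solve": the ordering is trivial to encode, but each boolean hides an unsolved perception or language-understanding problem.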
