The implementation of moral decision making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of systems that aim at goals or standards which may or may not be specified in explicitly theoretical terms. In this paper we wish to provide some direction for continued research by outlining the value and limitations inherent in each of these approaches.
Moral judgment in humans is a complex activity, and it is a skill that many either fail to learn adequately or perform with limited mastery. Although
W. Wallach
ISPS, Interdisciplinary Center for Bioethics, Yale University, 87 Trumbull Street, New Haven, CT 06520-8209, USA
e-mail: email@example.com

C. Allen
Department of History and Philosophy of Science, Indiana University, 1011 E. Third Street, Bloomington, IN 47405, USA
e-mail: firstname.lastname@example.org

I. Smit
E&E Consultants, Omval 401, 1096 HR Amsterdam, The Netherlands
e-mail: email@example.com
AI & Soc (2008) 22:565–582
DOI 10.1007/s00146-007-0099-0
ORIGINAL ARTICLE
Machine morality: bottom-up and top-down approaches for modelling human moral faculties
Received: 20 May 2005 / Accepted: 2 February 2007 / Published online: 9 March 2007
© Springer-Verlag London Limited 2007