Machine Ethics and Robot Ethics
Asimov’s Laws of Robotics: Implications for Information Technology
Pub Date : 2020-09-10 DOI: 10.4324/9781003074991-4
R. Clarke
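Asimov's Three Laws are often used as a toy illustration of lexically ordered constraints on machine behavior: a lower-priority law may never override a higher one. A minimal sketch of that ordering follows; all class and field names are hypothetical and the chapter itself contains no such code.

```python
# Toy sketch: Asimov's Three Laws as lexically ordered constraints.
# Illustrative only; names and fields are invented for this example.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # would the action injure a human?
    obeys_order: bool = True       # does it follow a human order?
    preserves_self: bool = True    # does it avoid damage to the robot?

def permitted(action: Action) -> bool:
    """Check the laws in strict priority order."""
    if action.harms_human:          # First Law: never harm a human
        return False
    if not action.obeys_order:      # Second Law: obey, unless it conflicts
        return False                # with the First Law
    return True

def choose(actions: list[Action]) -> Action | None:
    """Among permitted actions, prefer self-preservation (Third Law)."""
    candidates = [a for a in actions if permitted(a)]
    candidates.sort(key=lambda a: not a.preserves_self)
    return candidates[0] if candidates else None
```

The lexical ordering means the Third Law only ever breaks ties among actions already permitted by the first two, which is exactly the structure critics of the Laws probe for edge cases.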
Agencies in Technology Design: Feminist Reconfigurations*
Pub Date : 2020-09-10 DOI: 10.4324/9781003074991-32
L. Suchman
In this chapter I consider some new resources for thinking about how capacities for action are configured at the human-machine interface, informed by developments in feminist science and technology studies. While not all of the authors and works cited would identify as feminist, they share with feminist research – in my reading at least – a commitment to critical, but also reconstructive, engagement with received conceptions of the human, the technological and the relations between them. Based on my own experience of the worlds of technology research and development, I argue that these reconceptualisations have implications for everyday practices of technology design. Both reconceptualisations of the human-machine interface, moreover, and the practices of their realization are inflected by, and consequential for, gendered relations within technoscience and beyond. The ideas and examples that I discuss below are drawn from science and technology studies (STS), feminist theory, new media studies and experiments in cooperative systems design, each of which is multiple and extensive in itself and no one of which can be adequately represented here. My hope nonetheless is to trace out enough of the lines of resonant thought that run across these fields of research to indicate the fertility of the ground, specifically with respect to creative reconfigurations at the interface of human and machine. One of the issues at stake here is the question of what counts as ‘innovation’ in science and engineering. This in itself, I will propose, is a gendered question insofar as it aligns with the longstanding feminist concern with the problem of who is and who is not recognized in prevailing discourses of science and technology (see for example Suchman and Jordan 1989).
Recent research on the actual work involved in putting technologies into use highlights the mundane forms of inventive yet taken-for-granted labor, hidden in the background, that are necessary to the success of complex sociotechnical arrangements. A central strategy in recognizing those labors is to decenter sites of innovation from singular persons, places and things to multiple acts of everyday activity, including the actions through which only certain actors and associated achievements
What Should We Want From a Robot Ethic?
Pub Date : 2020-09-10 DOI: 10.4324/9781003074991-10
P. Asaro
There are at least three things we might mean by “ethics in robotics”: the ethical systems built into robots, the ethics of the people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one that addresses all three of these.
The Nature, Importance, and Difficulty of Machine Ethics
Pub Date : 2020-09-10 DOI: 10.4324/9781003074991-21
J. Moor
A Nascent Robotics Culture: New Complicities for Companionship
Pub Date : 2020-09-09 DOI: 10.4324/9781003074991-12
S. Turkle
Encounters with humanoid robots are new to the everyday experience of children and adults. Yet, increasingly, they are finding their place. This has occurred largely through the introduction of a class of interactive toys (including Furbies, AIBOs, and My Real Babies) that I call “relational artifacts.” Here, I report on several years of fieldwork with commercial relational artifacts (as well as with the MIT AI Laboratory’s Kismet and Cog). It suggests that even these relatively primitive robots have been accepted as companionate objects and are changing the terms by which people judge the “appropriateness” of machine relationships. In these relationships, robots serve as powerful objects of psychological projection and philosophical evocation in ways that are forging a nascent robotics culture.
Consciousness and Ethics: Artificially Conscious Moral Agents
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000674
Wendell Wallach, C. Allen, S. Franklin
What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly special classes of moral decision making will require attributes of consciousness such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most, if not all, moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, will help illustrate how consciousness can be understood to serve a very broad role in the making of all decisions, including moral decisions.
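The functional role the abstract assigns to consciousness can be caricatured in code as a global-workspace cycle: parallel percepts compete, one wins the "conscious" broadcast, and every decision is conditioned on that single broadcast rather than on the raw inputs. This is a loose, hypothetical sketch inspired by Global Workspace Theory (the tradition LIDA builds on), not the authors' LIDA model; all names are illustrative.

```python
# Hypothetical global-workspace decision cycle, loosely inspired by
# Global Workspace Theory. Not the LIDA model; names are invented.

def most_salient(percepts: dict) -> str:
    """Competition for broadcast: the percept with the highest
    salience wins the workspace this cycle."""
    return max(percepts, key=percepts.get)

def decide(percepts: dict, policies: dict) -> str:
    """Every decision, moral or otherwise, is conditioned on the
    single broadcast content rather than on raw parallel inputs."""
    broadcast = most_salient(percepts)
    return policies.get(broadcast, "deliberate further")

# Usage: a morally charged percept outcompetes a housekeeping one.
percepts = {"person in distress": 0.9, "low battery": 0.4}
policies = {"person in distress": "offer help", "low battery": "recharge"}
```

On this caricature, moral decisions are not a separate module: they differ from other decisions only in which content wins the broadcast, which is one way to read the article's claim that consciousness serves "a very broad role" in all decision making.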
Legal Personhood for Artificial Intelligences
Pub Date : 2008-03-20 DOI: 10.4324/9781003074991-37
Lawrence B. Solum
Could an artificial intelligence become a legal person? As of today, this question is only theoretical. No existing computer program currently possesses the sort of capacities that would justify serious judicial inquiry into the question of legal personhood. The question is nonetheless of some interest. Cognitive science begins with the assumption that the nature of human intelligence is computational, and therefore, that the human mind can, in principle, be modelled as a program that runs on a computer. Artificial intelligence (AI) research attempts to develop such models. But even as cognitive science has displaced behaviorism as the dominant paradigm for investigating the human mind, fundamental questions about the very possibility of artificial intelligence continue to be debated. This Essay explores those questions through a series of thought experiments that transform the theoretical question whether artificial intelligence is possible into legal questions such as, "Could an artificial intelligence serve as a trustee?" What is the relevance of these legal thought experiments for the debate over the possibility of artificial intelligence? A preliminary answer to this question has two parts. First, putting the AI debate in a concrete legal context acts as a pragmatic Occam's razor. By reexamining positions taken in cognitive science or the philosophy of artificial intelligence as legal arguments, we are forced to see them anew in a relentlessly pragmatic context. Philosophical claims that no program running on a digital computer could really be intelligent are put into a context that requires us to take a hard look at just what practical importance the missing reality could have for the way we speak and conduct our affairs. In other words, the legal context provides a way to ask for the "cash value" of the arguments.
The hypothesis developed in this Essay is that only some of the claims made in the debate over the possibility of AI do make a pragmatic difference, and it is pragmatic differences that ought to be decisive. Second, and more controversially, we can view the legal system as a repository of knowledge, a formal accumulation of practical judgments. The law embodies core insights about the way the world works and how we evaluate it. Moreover, in common-law systems judges strive to decide particular cases in a way that best fits the legal landscape: the prior cases, the statutory law, and the constitution. Hence, transforming the abstract debate over the possibility of AI into an imagined hard case forces us to check our intuitions and arguments against the assumptions that underlie social decisions made in many other contexts. By using a thought experiment that explicitly focuses on wide coherence, we increase the chance that the positions we eventually adopt will be in reflective equilibrium with our views about related matters. In addition, the law embodies practical knowledge in a form that is subject to public examination and discussion.