Could the destruction of a beloved robot be considered a hate crime? An exploration of the legal and social significance of robot love
Paula Sweeney
Pub Date: 2023-11-15 · DOI: 10.1007/s00146-023-01805-y · AI & Society 39(6): 2735–2741 · Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01805-y.pdf
In the future, it is likely that we will form strong bonds of attachment to, and even develop love for, social robots. Some of these loving relations will be, from the human's perspective, as significant as a loving relationship with another human. This means that, from the perspective of the loving human, the mindless destruction of their robot partner could be as devastating as the murder of another's human partner. Yet the loving partner of a robot has no legal recourse beyond a claim for the destruction of property, and can see no way to prevent future people from suffering the same devastating loss. On this basis, some have argued that such a scenario must surely motivate legal protection for social robots. In this paper, I argue that despite the devastating loss that would come from the destruction of one's robot partner, love cannot itself be a reason for granting robot rights. However, although I argue against beloved robots having protective rights, I argue that the loss of a robot partner must be socially recognised as a form of bereavement if further secondary harms are to be avoided, and that, if certain conditions obtain, the destruction of a beloved robot could be criminalised as a hate crime.
"Metropolis" Revisited. . .and Coming
John McClellan Marshall
Pub Date: 2023-11-10 · DOI: 10.1007/s00146-023-01799-7 · AI & Society 39(6): 2913–2920
This paper examines the societal paradigm shift growing from the tension between traditional institutional structures, in law and medicine for example, and the expansion of the human population. Similarly, the definition of "reality" in relation to the technological ability to create "virtual reality" in this environment is examined as a cyberæsthetic component of this evolutionary process. The paper asks whether mere algebraic expansion of the traditional systems is adequate to maintain the relationship between human beings and technology despite the pressure of increasing population numbers. At issue is whether the quality of human existence is diminished by the rapid onset of technological innovation. The corollary concept is the potential subjugation of human beings to the machine in the name of efficiency, with the film "Metropolis" as the example from popular culture. Set in the context of the Three Laws, the paper presents an interdisciplinary examination of the relationship between technological progress and the quality of life of the human species.
A review of Robots Won't Save Japan: An Ethnography of Eldercare Automation by James Wright
Ngoc-Thang B. Le, Manh-Tung Ho
Pub Date: 2023-11-08 · DOI: 10.1007/s00146-023-01800-3 · AI & Society 39(6): 3069–3070
Correction: Artificial intelligence as the new fire and its geopolitics
Manh-Tung Ho, Hong-Kong T. Nguyen
Pub Date: 2023-11-06 · DOI: 10.1007/s00146-023-01784-0 · AI & Society 39(6): 3073
Artificial intelligence and the law from a Japanese perspective: a book review
Manh-Tung Ho
Pub Date: 2023-11-04 · DOI: 10.1007/s00146-023-01801-2 · AI & Society 39(6): 3067–3068
Correction: Bowling alone in the autonomous vehicle: the ethics of well-being in the driverless car
Avigail Ferdman
Pub Date: 2023-11-02 · DOI: 10.1007/s00146-023-01781-3 · AI & Society 39(6): 3071
Review of "AI assurance: towards trustworthy, explainable, safe, and ethical AI" by Feras A. Batarseh and Laura J. Freeman, Academic Press, 2023
Jialei Wang, Li Fu
Pub Date: 2023-11-01 · DOI: 10.1007/s00146-023-01802-1 · AI & Society 39(6): 3065–3066
Mind who's testing: Turing tests and the post-colonial imposition of their implicit conceptions of intelligence
Fabian Fischbach, Tijs Vandemeulebroucke, Aimee van Wynsberghe
Pub Date: 2023-10-31 · DOI: 10.1007/s00146-023-01796-w · AI & Society
This paper aims to show that the dominant conceptions of intelligence used in artificial intelligence (AI) are biased by normative assumptions originating from the Global North, making it questionable whether AI can be uncritically applied elsewhere without risking serious harm to vulnerable people. After the introduction in Sect. 1, we briefly present the history of IQ testing in Sect. 2, focusing on its multiple discriminatory biases. To determine how these biases came into existence, we define intelligence ontologically and underline its constructed and culturally variable character. Turning to AI, specifically the Turing Test (TT), in Sect. 3, we critically examine its underlying conceptions of intelligence. The test has been of central influence in AI research and remains an important point of orientation. We argue that both the test itself and how it is used in practice risk promoting a limited conception of intelligence that originated solely in the Global North. Hence, this conception should be critically assessed in relation to the different global contexts in which AI technologies are and will be used. In Sect. 4, considering the history of IQ testing and the TT's practical biases, we highlight how unequal power relations in AI research are a real threat rather than mere philosophical sophistry. In the last section, we examine the limits of our account and identify fields for further investigation. Tracing colonial continuities in AI intelligence research, this paper points to a more diverse and historically aware approach to the design, development, and use of AI.
Grasping AI: experiential exercises for designers
Dave Murray-Rust, Maria Luce Lupetti, Iohanna Nicenboim, Wouter van der Hoog
Pub Date: 2023-10-30 · DOI: 10.1007/s00146-023-01794-y · AI & Society 39(6): 2891–2911 · Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01794-y.pdf
Artificial intelligence (AI) and machine learning (ML) are increasingly integrated into the functioning of physical and digital products, creating unprecedented opportunities for interaction and functionality. However, designers face a challenge in ideating within this creative landscape, balancing the possibilities of technology with human interactional concerns. We investigate techniques for exploring and reflecting on the interactional affordances, the unique relational possibilities, and the wider social implications of AI systems. We introduced into an interaction design course (n = 100) nine 'AI exercises' that draw on more-than-human design, responsible AI, and speculative enactment to create experiential engagements around AI interaction design. We find that exercises around metaphors and enactments make questions of training and learning, privacy and consent, and autonomy and agency more tangible, and thereby help students be more reflective and responsible about how they design with AI and its complex properties, in both their design process and outcomes.