Pub Date: 2023-11-08 | DOI: 10.1007/s00146-023-01800-3
Ngoc-Thang B. Le, Manh-Tung Ho
A review of Robots Won’t Save Japan: An Ethnography of Eldercare Automation by James Wright
Pub Date: 2023-11-06 | DOI: 10.1007/s00146-023-01784-0
Manh-Tung Ho, Hong-Kong T. Nguyen
Correction: Artificial intelligence as the new fire and its geopolitics
Pub Date: 2023-11-04 | DOI: 10.1007/s00146-023-01801-2
Manh-Tung Ho
Artificial intelligence and the law from a Japanese perspective: a book review
Pub Date: 2023-11-02 | DOI: 10.1007/s00146-023-01781-3
Avigail Ferdman
Correction: Bowling alone in the autonomous vehicle: the ethics of well-being in the driverless car
Pub Date: 2023-11-01 | DOI: 10.1007/s00146-023-01802-1
Jialei Wang, Li Fu
Review of “AI assurance: towards trustworthy, explainable, safe, and ethical AI” by Feras A. Batarseh and Laura J. Freeman, Academic Press, 2023
Pub Date: 2023-10-31 | DOI: 10.1007/s00146-023-01796-w
Fabian Fischbach, Tijs Vandemeulebroucke, Aimee van Wynsberghe
Mind who’s testing: Turing tests and the post-colonial imposition of their implicit conceptions of intelligence
Abstract: This paper aims to show that the dominant conceptions of intelligence used in artificial intelligence (AI) are biased by normative assumptions originating in the Global North, making it questionable whether AI can be uncritically applied elsewhere without risking serious harm to vulnerable people. After the introduction in Sect. 1, we briefly present the history of IQ testing in Sect. 2, focusing on its multiple discriminatory biases. To determine how these biases came into existence, we define intelligence ontologically and underline its constructed and culturally variable character. Turning to AI, and specifically the Turing Test (TT), in Sect. 3 we critically examine its underlying conceptions of intelligence. The test has been a central influence on AI research and remains an important point of orientation. We argue that both the test itself and how it is used in practice risk promoting a limited conception of intelligence that originated solely in the Global North. Hence, this conception should be critically assessed in relation to the different global contexts in which AI technologies are and will be used. In Sect. 4, drawing on the history of IQ testing and the TT’s practical biases, we highlight how unequal power relations in AI research are a real threat rather than mere philosophical sophistry. In the last section, we examine the limits of our account and identify fields for further investigation. Tracing colonial continuities in AI intelligence research, this paper points to a more diverse and historically aware approach to the design, development, and use of AI.
Pub Date: 2023-10-30 | DOI: 10.1007/s00146-023-01794-y
Dave Murray-Rust, Maria Luce Lupetti, Iohanna Nicenboim, Wouter van der Hoog
Grasping AI: experiential exercises for designers
Abstract: Artificial intelligence (AI) and machine learning (ML) are increasingly integrated into the functioning of physical and digital products, creating unprecedented opportunities for interaction and functionality. However, designers face a challenge in ideating within this creative landscape, balancing the possibilities of the technology against human interactional concerns. We investigate techniques for exploring and reflecting on the interactional affordances, the unique relational possibilities, and the wider social implications of AI systems. We introduced nine ‘AI exercises’ into an interaction design course (n = 100); the exercises draw on more-than-human design, responsible AI, and speculative enactment to create experiential engagements around AI interaction design. We find that exercises around metaphors and enactments make questions of training and learning, privacy and consent, and autonomy and agency more tangible, thereby helping students be more reflective and responsible about how to design with AI and its complex properties in both their design process and outcomes.
Pub Date: 2023-10-26 | DOI: 10.1007/s00146-023-01791-1
Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle, Fabian Sofsky
Friend or foe? Exploring the implications of large language models on the science system
Abstract: The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on the applications and limitations of LLMs, their effects on the science system, ethical and legal considerations, and the competencies required for their effective use. Our findings highlight the transformative potential of LLMs in science, particularly for administrative, creative, and analytical tasks. However, risks related to bias, misinformation, and quality assurance need to be addressed through proactive regulation and science education. This research contributes to informed discussions on the impact of generative AI in science and helps identify areas for future action.
Pub Date: 2023-10-21 | DOI: 10.1007/s00146-023-01770-6
Victoria Vesna
Towards a decolonial I in AI & Society