Pub Date: 2023-05-28 | DOI: 10.1007/s00146-023-01690-5
Giulio Mangano, Andrea Ferrari, Carlo Rafele, Enrico Vezzetti, Federica Marcolin
Research on technologies and methodologies for (accurate, real-time, spontaneous, three-dimensional…) facial expression recognition is ongoing and has been fostered in recent decades by advances in classification algorithms such as deep learning, which places this research within the Artificial Intelligence literature. Still, despite its emerging application in contexts such as human–computer interaction, product and service design, and marketing, few studies have investigated the willingness of end users to share their facial data for the purpose of detecting emotions. This study investigates the level of awareness and interest of 373 potential consumers in this technology in the car insurance sector, particularly in the contract-drafting phase, with a focus on differentiating respondents between Generations Y and Z. Results show that younger people, individuals with higher levels of education, and social network users feel more confident about this innovative technology and are more likely to share their expressive facial data.
Title: Willingness of sharing facial data for emotion recognition: a case study in the insurance market. AI & Society, 39(5), 2373–2384.
Pub Date: 2023-05-24 | DOI: 10.1007/s00146-023-01694-1
Randon R. Taylor, Bessie O’Dell, John W. Murphy
This article provides a course correction in the discourse surrounding human-centric AI by elucidating the philosophical underpinning that creates the view that AI is divorced from human-centric values. Next, we argue for explicitly designating the stakeholder- or community-centric values that are needed to resolve the issue of alignment. To achieve this, we present two frameworks: Ubuntu and maximum feasible participation. Finally, we demonstrate how employing these frameworks in AI can benefit society by flattening the top-down social hierarchies through which AI is currently being utilized. Implications are discussed.
Title: Human-centric AI: philosophical and community-centric considerations. AI & Society, 39(5), 2417–2424. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01694-1.pdf
Pub Date: 2023-05-20 | DOI: 10.1007/s00146-023-01684-3
Alice Liefgreen, Netta Weinstein, Sandra Wachter, Brent Mittelstadt
Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values such as transparency and fairness from the outset. Drawing on insights from psychological theories, we assert the need to understand the values that underlie decisions made by individuals involved in creating and deploying AI systems. We describe how this understanding can be leveraged to increase engagement with de-biasing and fairness-enhancing practices within the AI healthcare industry, ultimately leading to sustained behavioral change via autonomy-supportive communication strategies rooted in motivational and social psychology theories. In developing these pathways to engagement, we consider the norms and needs that govern the AI healthcare domain, and we evaluate incentives for maintaining the status quo against economic, legal, and social incentives for behavior change in line with transparency and fairness values.
Title: Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. AI & Society, 39(5), 2183–2199. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01684-3.pdf
Pub Date: 2023-05-19 | DOI: 10.1007/s00146-023-01689-y
Gabriele Griffin, Elisabeth Wennerström, Anna Foka
This article examines the challenges and opportunities that arise with artificial intelligence (AI) and machine learning (ML) methods and tools when implemented within cultural heritage institutions (CHIs), focusing on three selected Swedish case studies. The article centres on the perspectives of the CHI professionals who deliver that implementation. Its purpose is to elucidate how CHI professionals respond to the opportunities and challenges AI/ML provides. The three Swedish CHIs discussed here represent different organizational frameworks and have different types of collections, while sharing, to some extent, a similar position in terms of the use of AI/ML tools and methodologies. The overarching question of this article is what is the state of knowledge about AI/ML among Swedish CHI professionals, and what are the related issues? To answer this question, we draw on (1) semi-structured interviews with CHI professionals, (2) individual CHI website information, and (3) CHI-internal digitization protocols and digitalization strategies, to provide a nuanced analysis of both professional and organisational processes concerning the implementation of AI/ML methods and tools. Our study indicates that AI/ML implementation is in many ways at the very early stages of implementation in Swedish CHIs. The CHI professionals are affected in their AI/ML engagement by four key issues that emerged in the interviews: their institutional and professional knowledge regarding AI/ML; the specificities of their collections and associated digitization and digitalization issues; issues around personnel; and issues around AI/ML resources. The article suggests that a national CHI strategy for AI/ML might be helpful as would be knowledge-, expertise-, and potentially personnel- and resource-sharing to move beyond the constraints that the CHIs face in implementing AI/ML.
Title: AI and Swedish Heritage Organisations: challenges and opportunities. AI & Society, 39(5), 2359–2372. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01689-y.pdf
Pub Date: 2023-05-17 | DOI: 10.1007/s00146-023-01676-3
Ana Cristina Bicharra Garcia, Marcio Gomes Pinto Garcia, Roberto Rigobon
The widespread use of machine learning systems and econometric methods in the credit domain has transformed the decision-making process for evaluating loan applications. Automated analysis of credit applications diminishes the subjectivity of the decision-making process. On the other hand, since machine learning is based on past decisions recorded in financial institutions' datasets, the process very often consolidates existing bias and prejudice against groups defined by race, sex, sexual orientation, and other attributes. Therefore, interest in identifying, preventing, and mitigating algorithmic discrimination has grown rapidly in many areas, such as Computer Science, Economics, Law, and Social Science. We conducted a comprehensive systematic literature review to understand (1) the research settings, including the discrimination theory foundation, the legal framework, and the applicable fairness metric; (2) the addressed issues and solutions; and (3) the open challenges for potential future research. We explored five sources: ACM Digital Library, Google Scholar, IEEE Digital Library, Springer Link, and Scopus. Following inclusion and exclusion criteria, we selected 78 papers written in English and published between 2017 and 2022. According to the meta-analysis of this literature survey, algorithmic discrimination has been addressed mainly from the Computer Science, Law, and Economics perspectives. There has been great interest in this topic in the financial area, especially discrimination in access to the mortgage market and differential treatment (different fees, numbers of installments, and interest rates). Most attention has been devoted to potential discrimination due to bias in the dataset. Researchers are still only dealing with direct discrimination, addressed by algorithmic fairness, while indirect (structural) discrimination has not received the same attention.
Title: Algorithmic discrimination in the credit domain: what do we know about it? AI & Society, 39(4), 2059–2098. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01676-3.pdf
Pub Date: 2023-05-17 | DOI: 10.1007/s00146-023-01692-3
Fabio Morreale, Elham Bahmanteymouri, Brent Burmester, Andrew Chen, Michelle Thorp
Many modern digital products use Machine Learning (ML) to emulate human abilities, knowledge, and intellect. To achieve this goal, ML systems need the greatest possible quantity of training data to allow the Artificial Intelligence (AI) model to develop an understanding of "what it means to be human". We propose that the processes by which companies collect this data are problematic, because they entail extractive practices that resemble labour exploitation. The article presents four case studies in which unwitting individuals contribute their humanness to develop AI training sets. Employing a post-Marxian framework, we then analyse the characteristics of these individuals and describe the elements of the capture-machine. Then, by describing and characterising the types of applications that are problematic, we set a foundation for defining and justifying interventions to address this form of labour exploitation.
Title: The unwitting labourer: extracting humanness in AI training. AI & Society, 39(5), 2389–2399. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01692-3.pdf
Pub Date: 2023-05-16 | DOI: 10.1007/s00146-023-01667-4
Vibeke Sørensen, J Stephen Lansing
Intelligence augmentation was one of the original goals of computing. Artificial Intelligence (AI) inherits this project and is at the leading edge of computing today. Computing can be considered an extension of brain and body, with mathematical prowess and logic fundamental to the infrastructure of computing. Multimedia computing, which senses, analyzes, and translates data to and from visual images, animation, sound and music, touch and haptics, and even smell, is based on our human senses and is now commonplace. We use data visualization and sonification, as well as data mining and analysis, to sort through the complexity and vast volume of data coming from the world inside and around us. It helps us 'see' in new ways. We can think of this capacity as a new kind of "digital glasses". The Internet of Living Things (IOLT) is potentially an even more profound extension of ourselves to the world: a network of electronic devices embedded into objects, but now with subcutaneous and ingestible devices, and embedded sensors that include people and other living things. As in the Internet of Things (IOT), living things are connected; we call those connections "ecology". As the IOT becomes increasingly synonymous with the IOLT, the question of ethics that is at the centre of aesthetics and the arts will move to the forefront of our experience of, and regard for, the world in and around us.
Title: Art, technology and the Internet of Living Things. AI & Society, pp. 1–17 (online first). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10187521/pdf/
Pub Date: 2023-05-15 | DOI: 10.1007/s00146-023-01687-0
Luca Capone, Marta Rocchi, Marta Bertolaso
In the current social and technological scenario, the term digital is used abundantly, with an apparently transparent and unambiguous meaning. This article aims to unveil the complexity of this concept by retracing its historical and cultural origin. This genealogical overview makes it possible to understand why an instrumental conception of digital media has prevailed, which considers the digital a mere tool to convey a message, as opposed to a constitutive conception. The constitutive conception places the digital phenomenon in the broader field of media studies and considers digital technologies an interface between the subject and the world. On this view, the medium is not added to the experience of the person but shapes it from within on a cognitive, expressive, and communicative level. The article uses two powerful examples to show the shortcomings of an instrumental conception of the digital, and to affirm the value of a constitutive conception for current media studies regarding digital interfaces.
Title: Rethinking "digital": a genealogical enquiry into the meaning of digital and its impact on individuals and society. AI & Society, 39(5), 2285–2295. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01687-0.pdf
Pub Date: 2023-05-12 | DOI: 10.1007/s00146-023-01678-1
Manh-Tung Ho, H. Nguyen
Title: Artificial intelligence as the new fire and its geopolitics. AI & Society, pp. 1–2.
Pub Date: 2023-05-12 | DOI: 10.1007/s00146-023-01666-5
Michael C. Horowitz, Lauren Kahn, Julia Macdonald, Jacquelyn Schneider
Despite pronouncements about the inevitable diffusion of artificial intelligence and autonomous technologies, in practice it is human behavior, not technology in a vacuum, that dictates how technology seeps into—and changes—societies. To better understand how human preferences shape technological adoption and the spread of AI-enabled autonomous technologies, we look at representative adult samples of US public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four diverse uses of AI-enabled autonomy, which span transportation, medicine, and national security, we exploit the inherent variation between these use cases. We find that those with familiarity and expertise with AI and similar technologies were more likely than those with a limited understanding of the technology to support all of the autonomous applications we tested, except weapons. Individuals who had already delegated the act of driving by using ride-share apps were also more positive about autonomous vehicles. However, familiarity cuts both ways: individuals are also less likely to support AI-enabled technologies when these are applied directly to their own lives, especially if the technology automates tasks they are already accustomed to performing. Finally, we find that familiarity plays little role in support for AI-enabled military applications, for which opposition has slightly increased over time.
Title: Adopting AI: how familiarity breeds both trust and contempt. AI & Society, 39(4), 1721–1735.