
AI & Society: Latest Publications

Sentience, Vulcans, and zombies: the value of phenomenal consciousness
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2024-01-12 | DOI: 10.1007/s00146-023-01835-6
Joshua Shepherd

Many think that a specific aspect of phenomenal consciousness—valenced or affective experience—is essential to consciousness’s moral significance (valence sentientism). They hold that valenced experience is necessary for well-being, or moral status, or psychological intrinsic value (or all three). Some think that phenomenal consciousness generally is necessary for non-derivative moral significance (broad sentientism). Few think that consciousness is unnecessary for moral significance (non-necessitarianism). In this paper, I consider the prospects for these views. I first consider the prospects for valence sentientism in light of Vulcans, beings who are conscious but without affect or valence of any sort. I think Vulcans pressure us to accept broad sentientism. But I argue that a consideration of explanations for broad sentientism opens up possible explanations for non-necessitarianism about the moral significance of consciousness. That is, once one leans away from valence sentientism because of Vulcans, one should feel pressure to accept a view on which consciousness is not necessary for well-being, moral status, or psychological intrinsic value.

{"title":"Sentience, Vulcans, and zombies: the value of phenomenal consciousness","authors":"Joshua Shepherd","doi":"10.1007/s00146-023-01835-6","DOIUrl":"10.1007/s00146-023-01835-6","url":null,"abstract":"<div><p>Many think that a specific aspect of phenomenal consciousness—valenced or affective experience—is essential to consciousness’s moral significance (valence sentientism). They hold that valenced experience is necessary for well-being, or moral status, or psychological intrinsic value (or all three). Some think that phenomenal consciousness generally is necessary for non-derivative moral significance (broad sentientism). Few think that consciousness is unnecessary for moral significance (non-necessitarianism). In this paper, I consider the prospects for these views. I first consider the prospects for valence sentientism in light of Vulcans, beings who are conscious but without affect or valence of any sort. I think Vulcans pressure us to accept broad sentientism. But I argue that a consideration of explanations for broad sentientism opens up possible explanations for non-necessitarianism about the moral significance of consciousness. That is, once one leans away from valence sentientism because of Vulcans, one should feel pressure to accept a view on which consciousness is not necessary for well-being, moral status, or psychological intrinsic value.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"3005 - 3015"},"PeriodicalIF":2.9,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01835-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139532292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Evaluating the acceptability of ethical recommendations in industry 4.0: an ethics by design approach
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2024-01-12 | DOI: 10.1007/s00146-023-01834-7
Marc M. Anderson, Karën Fort

In this paper, we present the methodology we used in the European Horizon 2020 AI-PROFICIENT project to evaluate the implementation of its ethical component. The project is a 3-year collaboration between a university partner and industrial and tech partners, which aims to research the integration of AI services in heavy industry work settings. An AI ethics approach developed for the project has involved embedded ethical analysis of work contexts and design solutions and the generation of specific and evolving ethical recommendations for partners. We have performed an ongoing evaluation and monitoring of the implementation of recommendations. We describe the quantitative results of these implementations: overall, broken down by category, and broken down by category and responsible project partner (anonymized). In parallel, we discuss the results in light of our approach and offer insights for future research into the ground-level application of ethical recommendations for AI in heavy industry.
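The quantitative reporting described here (implementation rates overall, by category, and by category and partner) can be pictured with a toy tally. Below is a minimal Python/pandas sketch in which every recommendation, category, partner, and status is invented for illustration; none of it is AI-PROFICIENT project data.

```python
import pandas as pd

# Hypothetical recommendation-tracking records: each row records whether one
# ethical recommendation was implemented, its category, and the responsible
# (anonymised) partner. All values are invented for illustration.
records = pd.DataFrame([
    {"recommendation": "R1", "category": "transparency",     "partner": "P1", "implemented": True},
    {"recommendation": "R2", "category": "worker wellbeing", "partner": "P1", "implemented": False},
    {"recommendation": "R3", "category": "transparency",     "partner": "P2", "implemented": True},
    {"recommendation": "R4", "category": "safety",           "partner": "P2", "implemented": True},
])

print(records["implemented"].mean())                                   # overall implementation rate
print(records.groupby("category")["implemented"].mean())               # rate by category
print(records.groupby(["category", "partner"])["implemented"].mean())  # rate by category and partner
```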

{"title":"Evaluating the acceptability of ethical recommendations in industry 4.0: an ethics by design approach","authors":"Marc M. Anderson,&nbsp;Karën Fort","doi":"10.1007/s00146-023-01834-7","DOIUrl":"10.1007/s00146-023-01834-7","url":null,"abstract":"<div><p>In this paper, we present the methodology we used in the European Horizon 2020 AI-PROFICIENT project, to evaluate the implementation of the ethical component of the project. The project is a 3-year collaboration between a university partner and industrial and tech partners, which aims to research the integration of AI services in heavy industry work settings. An AI ethics approach developed for the project has involved embedded ethical analysis of work contexts and design solutions and the generation of specific and evolving ethical recommendations for partners. We have performed an ongoing evaluation and monitoring of the implementation of recommendations. We describe the quantitative results of these implementations: overall, broken down by category, and broken down by category and responsible project partner (anonymized). In parallel, we discuss the results in light of our approach and offer insights for future research into the ground-level application of ethical recommendations for AI in heavy industry.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2989 - 3003"},"PeriodicalIF":2.9,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139532695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Perspectives of patients and clinicians on big data and AI in health: a comparative empirical investigation
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2024-01-07 | DOI: 10.1007/s00146-023-01825-8
Patrik Hummel, Matthias Braun, Serena Bischoff, David Samhammer, Katharina Seitz, Peter A. Fasching, Peter Dabrock

Background

Big data and AI applications now play a major role in many health contexts. Much research has already been conducted on ethical and social challenges associated with these technologies. Likewise, there are already some studies that investigate empirically which values and attitudes play a role in connection with their design and implementation. What is still in its infancy, however, is the comparative investigation of the perspectives of different stakeholders.

Methods

To explore this issue in a multi-faceted manner, we conducted semi-structured interviews as well as focus group discussions with patients and clinicians. These empirical methods were used to gather interviewees' views on the opportunities and challenges of medical AI and other data-intensive applications.

Results

Different clinician and patient groups are exposed to medical AI to differing degrees. Interviewees expect and demand that the purposes of data processing accord with patient preferences, and that data are put to effective use to generate social value. One central result is the shared tendency of clinicians and patients to maintain individualistic ascriptions of responsibility for clinical outcomes.

Conclusions

Medical AI and the proliferation of data with import for health-related inferences shape and partially reconfigure stakeholder expectations of how these technologies relate to the decision-making of human agents. Intuitions about individual responsibility for clinical outcomes could eventually be disrupted by the increasing sophistication of data-intensive and AI-driven clinical tools. Besides individual responsibility, systemic governance will be key to promoting alignment with stakeholder expectations in AI-driven and data-intensive health settings.

{"title":"Perspectives of patients and clinicians on big data and AI in health: a comparative empirical investigation","authors":"Patrik Hummel,&nbsp;Matthias Braun,&nbsp;Serena Bischoff,&nbsp;David Samhammer,&nbsp;Katharina Seitz,&nbsp;Peter A. Fasching,&nbsp;Peter Dabrock","doi":"10.1007/s00146-023-01825-8","DOIUrl":"10.1007/s00146-023-01825-8","url":null,"abstract":"<div><h3>Background</h3><p>Big data and AI applications now play a major role in many health contexts. Much research has already been conducted on ethical and social challenges associated with these technologies. Likewise, there are already some studies that investigate empirically which values and attitudes play a role in connection with their design and implementation. What is still in its infancy, however, is the comparative investigation of the perspectives of different stakeholders.</p><h3>Methods</h3><p>To explore this issue in a multi-faceted manner, we conducted semi-structured interviews as well as focus group discussions with patients and clinicians. These empirical methods were used to gather interviewee’s views on the opportunities and challenges of medical AI and other data-intensive applications.</p><h3>Results</h3><p>Different clinician and patient groups are exposed to medical AI to differing degrees. Interviewees expect and demand that the purposes of data processing accord with patient preferences, and that data are put to effective use to generate social value. One central result is the shared tendency of clinicians and patients to maintain individualistic ascriptions of responsibility for clinical outcomes.</p><h3>Conclusions</h3><p>Medical AI and the proliferation of data with import for health-related inferences shape and partially reconfigure stakeholder expectations of how these technologies relate to the decision-making of human agents. Intuitions about individual responsibility for clinical outcomes could eventually be disrupted by the increasing sophistication of data-intensive and AI-driven clinical tools. Besides individual responsibility, systemic governance will be key to promote alignment with stakeholder expectations in AI-driven and data-intensive health settings.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2973 - 2987"},"PeriodicalIF":2.9,"publicationDate":"2024-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01825-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139449193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Artificial intelligence and modern planned economies: a discussion on methods and institutions
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2024-01-05 | DOI: 10.1007/s00146-023-01826-7
Spyridon Samothrakis

Interest in computerised central economic planning (CCEP) has seen a resurgence, as there is strong demand for an alternative vision to modern free (or not so free) market liberal capitalism. Given the close links of CCEP with what we would now broadly call artificial intelligence (AI)—e.g. optimisation, game theory, function approximation, machine learning, automated reasoning—it is reasonable to draw direct analogues and perform an analysis that would help identify what commodities and institutions we should see for a CCEP programme to become successful. Following this analysis, we conclude that a CCEP economy would need to have a very different outlook from current market practices, with a focus on producing basic “interlinking” commodities (e.g. tools, processed materials, instruction videos) that consumers can use as a form of collective R&D. On an institutional level, CCEP should strive for the release of basic commodities that empower consumers by having as many alternative uses as possible, but also making sure that a baseline of basic necessities is widely available.
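One canonical way of making computerised plan calculation concrete, and one reading of the optimisation link drawn above, is the linear-programming framing that goes back to the socialist-calculation debates. The sketch below is a toy illustration with invented numbers; it is not a model from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy planning problem: choose output quantities x for two goods (say, tools
# and processed materials) to maximise the planner's valuation, subject to
# limits on two resources. All numbers are invented for illustration.
value = np.array([3.0, 2.0])             # planner's valuation per unit of each good
resource_use = np.array([[2.0, 1.0],     # labour-hours required per unit
                         [1.0, 3.0]])    # energy units required per unit
resource_limit = np.array([100.0, 90.0]) # available labour and energy

# linprog minimises its objective, so negate the valuations to maximise them.
plan = linprog(c=-value, A_ub=resource_use, b_ub=resource_limit,
               bounds=[(0, None), (0, None)], method="highs")
print(plan.x)  # optimal quantities under the toy constraints, here [42., 16.]
```

Scaling such programs to economy size quickly becomes intractable, which is one reason the abstract's other AI techniques, such as function approximation and machine learning, enter the discussion.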

{"title":"Artificial intelligence and modern planned economies: a discussion on methods and institutions","authors":"Spyridon Samothrakis","doi":"10.1007/s00146-023-01826-7","DOIUrl":"10.1007/s00146-023-01826-7","url":null,"abstract":"<div><p>Interest in computerised central economic planning (CCEP) has seen a resurgence, as there is strong demand for an alternative vision to modern free (or not so free) market liberal capitalism. Given the close links of CCEP with what we would now broadly call artificial intelligence (AI)—e.g. optimisation, game theory, function approximation, machine learning, automated reasoning—it is reasonable to draw direct analogues and perform an analysis that would help identify what commodities and institutions we should see for a CCEP programme to become successful. Following this analysis, we conclude that a CCEP economy would need to have a very different outlook from current market practices, with a focus on producing basic “interlinking” commodities (e.g. tools, processed materials, instruction videos) that consumers can use as a form of collective R &amp;D. On an institutional level, CCEP should strive for the release of basic commodities that empower consumers by having as many alternative uses as possible, but also making sure that a baseline of basic necessities is widely available.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2961 - 2972"},"PeriodicalIF":2.9,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01826-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139382702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Understanding users’ responses to disclosed vs. undisclosed customer service chatbots: a mixed methods study
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2024-01-05 | DOI: 10.1007/s00146-023-01818-7
Margot J. van der Goot, Nathalie Koubayová, Eva A. van Reijmersdal

Due to huge advancements in natural language processing (NLP) and machine learning, chatbots are gaining significance in the field of customer service. For users, it may be hard to distinguish whether they are communicating with a human or a chatbot. This raises ethical issues, as users have the right to know who or what they are interacting with (European Commission in Regulatory framework proposal on artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, 2022). One of the solutions is to include a disclosure at the start of the interaction (e.g., “this is a chatbot”). However, companies are reluctant to use disclosures, as consumers may perceive artificial agents as less knowledgeable and empathetic than their human counterparts (Luo et al. in Market Sci 38(6):937–947, 2019). The current mixed methods study, combining qualitative interviews (n = 8) and a quantitative experiment (n = 194), delves into users’ responses to a disclosed vs. undisclosed customer service chatbot, focusing on source orientation, anthropomorphism, and social presence. The qualitative interviews reveal that it is the willingness to help the customer and the friendly tone of voice that matter to users, regardless of the artificial status of the customer care representative. The experiment did not show significant effects of the disclosure (vs. non-disclosure). Implications for research, legislators and businesses are discussed.

{"title":"Understanding users’ responses to disclosed vs. undisclosed customer service chatbots: a mixed methods study","authors":"Margot J. van der Goot,&nbsp;Nathalie Koubayová,&nbsp;Eva A. van Reijmersdal","doi":"10.1007/s00146-023-01818-7","DOIUrl":"10.1007/s00146-023-01818-7","url":null,"abstract":"<div><p>Due to huge advancements in natural language processing (NLP) and machine learning, chatbots are gaining significance in the field of customer service. For users, it may be hard to distinguish whether they are communicating with a human or a chatbot. This brings ethical issues, as users have the right to know who or what they are interacting with (European Commission in Regulatory framework proposal on artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, 2022). One of the solutions is to include a disclosure at the start of the interaction (e.g., “this is a chatbot”). However, companies are reluctant to use disclosures, as consumers may perceive artificial agents as less knowledgeable and empathetic than their human counterparts (Luo et al. in Market Sci 38(6):937–947, 2019). The current mixed methods study, combining qualitative interviews (<i>n</i> = 8) and a quantitative experiment (<i>n</i> = 194), delves into users’ responses to a disclosed vs. undisclosed customer service chatbot, focusing on source orientation, anthropomorphism, and social presence. The qualitative interviews reveal that it is the willingness to help the customer and the friendly tone of voice that matters to the users, regardless of the artificial status of the customer care representative. The experiment did not show significant effects of the disclosure (vs. non-disclosure). Implications for research, legislators and businesses are discussed.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2947 - 2960"},"PeriodicalIF":2.9,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01818-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139380962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Escape climate apathy by harnessing the power of generative AI
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2024-01-03 | DOI: 10.1007/s00146-023-01830-x
Quan-Hoang Vuong, Manh-Tung Ho
{"title":"Escape climate apathy by harnessing the power of generative AI","authors":"Quan-Hoang Vuong,&nbsp;Manh-Tung Ho","doi":"10.1007/s00146-023-01830-x","DOIUrl":"10.1007/s00146-023-01830-x","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"3057 - 3058"},"PeriodicalIF":2.9,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139388200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Modeling AI Trust for 2050: perspectives from media and info-communication experts
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2024-01-03 | DOI: 10.1007/s00146-023-01827-6
Katalin Feher, Lilla Vicsek, Mark Deuze

The study explores the future of AI-driven media and info-communication as envisioned by experts from all world regions, defining relevant terminology and expectations for 2050. Participants engaged in a 4-week series of surveys probing their definitions and projections of AI for the field of media and communication. Their expectations predict universal access to democratically available, automated, personalized and unbiased information determined by trusted narratives, the recolonization of information technology, and the demystification of the media process. These experts, as technology ambassadors, advocate AI-to-AI solutions to mitigate technology-driven misuse and misinformation. The optimistic scenarios shift responsibility to future generations, relying on AI-driven solutions and finding inspiration in nature. Their present-based forecasts could be construed as indicative of professional near-sightedness and cognitive dissonance. Visualizing our findings as a Glasses Model of AI Trust, the study contributes to key debates regarding AI policy, developmental trajectories, and academic research in media and info-communication fields.

{"title":"Modeling AI Trust for 2050: perspectives from media and info-communication experts","authors":"Katalin Feher,&nbsp;Lilla Vicsek,&nbsp;Mark Deuze","doi":"10.1007/s00146-023-01827-6","DOIUrl":"10.1007/s00146-023-01827-6","url":null,"abstract":"<div><p>The study explores the future of AI-driven media and info-communication as envisioned by experts from all world regions, defining relevant terminology and expectations for 2050. Participants engaged in a 4-week series of surveys, questioning their definitions and projections about AI for the field of media and communication. Their expectations predict universal access to democratically available, automated, personalized and unbiased information determined by trusted narratives, recolonization of information technology and the demystification of the media process. These experts, as technology ambassadors, advocate AI-to-AI solutions to mitigate technology-driven misuse and misinformation. The optimistic scenarios shift responsibility to future generations, relying on AI-driven solutions and finding inspiration in nature. Their present-based forecasts could be construed as being indicative of professional near-sightedness and cognitive dissonance. Visualizing our findings into a Glasses Model of AI Trust, the study contributes to key debates regarding AI policy, developmental trajectories, and academic research in media and info-communication fields.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2933 - 2946"},"PeriodicalIF":2.9,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01827-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139387548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Trustworthy AI: AI made in Germany and Europe?
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-11-22 | DOI: 10.1007/s00146-023-01808-9
Hartmut Hirsch-Kreinsen, Thorben Krokowski

As the capabilities of artificial intelligence (AI) continue to expand, concerns are also growing about the ethical and social consequences of unregulated development and, above all, use of AI systems in a wide range of social areas. It is therefore indisputable that the application of AI requires social standardization and regulation. For years, innovation policy measures and the most diverse activities of European and German institutions have been directed toward this goal. Under the label “Trustworthy AI” (TAI), a promise is formulated, according to which AI can meet criteria of transparency, legality, privacy, non-discrimination, and reliability. In this article, we ask what significance and scope the politically initiated concepts of TAI occupy in the current process of AI dynamics and to what extent they can stand for an independent, unique European or German development path of this technology.

{"title":"Trustworthy AI: AI made in Germany and Europe?","authors":"Hartmut Hirsch-Kreinsen,&nbsp;Thorben Krokowski","doi":"10.1007/s00146-023-01808-9","DOIUrl":"10.1007/s00146-023-01808-9","url":null,"abstract":"<div><p>As the capabilities of artificial intelligence (AI) continue to expand, concerns are also growing about the ethical and social consequences of unregulated development and, above all, use of AI systems in a wide range of social areas. It is therefore indisputable that the application of AI requires social standardization and regulation. For years, innovation policy measures and the most diverse activities of European and German institutions have been directed toward this goal. Under the label “Trustworthy AI” (TAI), a promise is formulated, according to which AI can meet criteria of transparency, legality, privacy, non-discrimination, and reliability. In this article, we ask what significance and scope the politically initiated concepts of TAI occupy in the current process of AI dynamics and to what extent they can stand for an independent, unique European or German development path of this technology.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2921 - 2931"},"PeriodicalIF":2.9,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01808-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139249361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
“Machine Down”: making sense of human–computer interaction—Garfinkel’s research on ELIZA and LYRIC from 1967 to 1969 and its contemporary relevance
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-11-21 | DOI: 10.1007/s00146-023-01793-z
Clemens Eisenmann, Jakub Mlynář, Jason Turowetz, Anne W. Rawls

This paper examines Harold Garfinkel’s work with ELIZA and a related program LYRIC from 1967 to 1969. AI researchers have tended to treat successful human–machine interaction as if it relied primarily on non-human machine characteristics, and thus the often-reported attribution of human-like qualities to communication with computers has been criticized as a misperception—and humans who make such reports referred to as “deluded.” By contrast Garfinkel, building on two decades of prior research on information and communication, argued that the ELIZA and the LYRIC “chatbots” were achieving interactions that felt human to many users by exploiting human sense-making practices. In keeping with his long-term practice of using “trouble” as a way of discovering the taken-for-granted practices of human sense-making, Garfinkel designed scripts for ELIZA and LYRIC that he could disrupt in order to reveal how their success depended on human social practices. Hence, the announcement “Machine Down” by the chatbot was a desired result of Garfinkel’s interactions with it. This early (but largely unknown) research has implications not only for understanding contemporary AI chatbots, but also opens possibilities for respecifying current information systems design and computational practices to provide for the design of more flexible information objects.
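The keyword-and-reassembly mechanism behind ELIZA-type programs is compact enough to sketch. The Python below is a minimal illustrative version, not Weizenbaum's original script and not Garfinkel's LYRIC: replies simply recycle fragments of the user's own words, so the felt coherence of the exchange is largely supplied by the human's sense-making.

```python
import random
import re

# Minimal ELIZA-style responder (illustrative only). A rule decomposes the
# input around a keyword and reassembles the captured fragment into a reply.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"\bi am (.*)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bi need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bbecause (.*)", re.I),
     ["Is that the real reason?"]),
]
DEFAULTS = ["Please go on.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    # Swap first and second person so the echoed fragment reads naturally.
    return " ".join(REFLECT.get(word.lower(), word) for word in fragment.split())

def reply(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1).strip(".!?")))
    # No keyword matched: a content-free prompt keeps the exchange going.
    return random.choice(DEFAULTS)

print(reply("I am anxious about my thesis."))  # e.g. "Why do you say you are anxious about your thesis?"
```

Disrupting the rule table mid-conversation, so that the program can only announce a breakdown, is analogous to the scripted troubles Garfinkel introduced to make users' repair work observable.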

{"title":"“Machine Down”: making sense of human–computer interaction—Garfinkel’s research on ELIZA and LYRIC from 1967 to 1969 and its contemporary relevance","authors":"Clemens Eisenmann,&nbsp;Jakub Mlynář,&nbsp;Jason Turowetz,&nbsp;Anne W. Rawls","doi":"10.1007/s00146-023-01793-z","DOIUrl":"10.1007/s00146-023-01793-z","url":null,"abstract":"<div><p>This paper examines Harold Garfinkel’s work with ELIZA and a related program LYRIC from 1967 to 1969. AI researchers have tended to treat successful human–machine interaction as if it relied primarily on non-human machine characteristics, and thus the often-reported attribution of human-like qualities to communication with computers has been criticized as a misperception—and humans who make such reports referred to as “deluded.” By contrast Garfinkel, building on two decades of prior research on information and communication, argued that the ELIZA and the LYRIC “chatbots” were achieving interactions that felt human to many users by exploiting human sense-making practices. In keeping with his long-term practice of using “trouble” as a way of discovering the taken-for-granted practices of human sense-making, Garfinkel designed scripts for ELIZA and LYRIC that he could disrupt in order to reveal how their success depended on human social practices. Hence, the announcement “Machine Down” by the chatbot was a desired result of Garfinkel’s interactions with it. This early (but largely unknown) research has implications not only for understanding contemporary AI chatbots, but also opens possibilities for respecifying current information systems design and computational practices to provide for the design of more flexible information objects.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2715 - 2733"},"PeriodicalIF":2.9,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01793-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139252865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Could the destruction of a beloved robot be considered a hate crime? An exploration of the legal and social significance of robot love
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-11-15 | DOI: 10.1007/s00146-023-01805-y
Paula Sweeney

In the future, it is likely that we will form strong bonds of attachment and even develop love for social robots. Some of these loving relations will be, from the human’s perspective, as significant as a loving relationship that they might have had with another human. This means that, from the perspective of the loving human, the mindless destruction of their robot partner could be as devastating as the murder of another’s human partner. Yet, the loving partner of a robot has no recourse to legal action beyond the destruction of property and can see no way to prevent future people suffering the same devastating loss. On this basis, some have argued that such a scenario must surely motivate legal protection for social robots. In this paper, I argue that despite the devastating loss that would come from the destruction of one’s robot partner, love cannot itself be a reason for granting robot rights. However, although I argue against beloved robots having protective rights, I argue that the loss of a robot partner must be socially recognised as a form of bereavement if further secondary harms are to be avoided, and that, if certain conditions obtain, the destruction of a beloved robot could be criminalised as a hate crime.

{"title":"Could the destruction of a beloved robot be considered a hate crime? An exploration of the legal and social significance of robot love","authors":"Paula Sweeney","doi":"10.1007/s00146-023-01805-y","DOIUrl":"10.1007/s00146-023-01805-y","url":null,"abstract":"<div><p>In the future, it is likely that we will form strong bonds of attachment and even develop love for social robots. Some of these loving relations will be, from the human’s perspective, as significant as a loving relationship that they might have had with another human. This means that, from the perspective of the loving human, the mindless destruction of their robot partner could be as devastating as the murder of another’s human partner. Yet, the loving partner of a robot has no recourse to legal action beyond the destruction of property and can see no way to prevent future people suffering the same devastating loss. On this basis, some have argued that such a scenario must surely motivate legal protection for social robots. In this paper, I argue that despite the devastating loss that would come from the destruction of one’s robot partner, love cannot itself be a reason for granting robot rights. However, although I argue against beloved robots having protective rights, I argue that the loss of a robot partner must be socially recognised as a form of bereavement if further secondary harms are to be avoided, and that, if certain conditions obtain, the destruction of a beloved robot could be criminalised as a hate crime.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2735 - 2741"},"PeriodicalIF":2.9,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01805-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139271153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0