
Latest publications in Ai Magazine

Toward the confident deployment of real-world reinforcement learning agents
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-09-22 · DOI: 10.1002/aaai.12190
Josiah P. Hanna

Intelligent learning agents must be able to learn from experience so as to accomplish tasks that require more ability than could be initially programmed. Reinforcement learning (RL) has emerged as a potentially powerful class of solution methods for creating agents that learn from trial-and-error interaction with the world. Despite many prominent success stories, a number of challenges often stand in the way of using RL on real-world problems. As part of the AAAI New Faculty Highlight Program, in this article, I will describe the work that my group is doing at the University of Wisconsin—Madison with the intent of removing barriers to the use of RL in practice. Specifically, I will describe recent work that aims to give practitioners confidence in learned behaviors, methods to increase the data efficiency of RL, and work on “challenge” domains that stress RL algorithms beyond current testbeds.
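The trial-and-error loop the abstract refers to, together with one simple way to attach a confidence statement to a learned behavior, can be sketched in a few lines. Everything below — the two-state chain environment, the hyperparameters, and the bootstrap interval — is an illustrative stand-in, not the paper's method.

```python
import random

def step(state, action):
    """Toy two-state chain: taking action 1 in state 1 reaches the goal (reward 1)."""
    if state == 1 and action == 1:
        return 0, 1.0, True                      # next_state, reward, done
    return min(state + action, 1), 0.0, False

def q_learning(episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning: the trial-and-error loop."""
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(20):                      # cap episode length
            if random.random() < eps:
                a = random.choice((0, 1))        # explore
            else:
                a = max((0, 1), key=lambda a_: q[(s, a_)])  # exploit
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if done:
                break
    return q

def evaluate(q, n=200):
    """Greedy rollouts of the learned behavior; returns per-episode returns."""
    returns = []
    for _ in range(n):
        s, g = 0, 0.0
        for _ in range(20):
            a = max((0, 1), key=lambda a_: q[(s, a_)])
            s, r, done = step(s, a)
            g += r
            if done:
                break
        returns.append(g)
    return returns

q = q_learning()
rets = evaluate(q)
mean = sum(rets) / len(rets)
# Bootstrap a 95% interval on the mean return -- one crude way to quantify
# confidence in a learned behavior before deployment.
boots = sorted(
    sum(random.choices(rets, k=len(rets))) / len(rets) for _ in range(1000)
)
print(f"mean return: {mean:.2f}, 95% CI: ({boots[25]:.2f}, {boots[975]:.2f})")
```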

Ai Magazine, 45(3), 396–403. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12190
Citations: 0
Towards robust visual understanding: A paradigm shift in computer vision from recognition to reasoning
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-09-22 · DOI: 10.1002/aaai.12194
Tejas Gokhale

Models that learn from data are widely and rapidly being deployed today for real-world use, but they suffer from unforeseen failures that limit their reliability. These failures often have several causes, such as distribution shift; adversarial attacks; calibration errors; scarcity of data and/or ground-truth labels; noisy, corrupted, or partial data; and limitations of evaluation metrics. But many failures also occur because many modern AI tasks require reasoning beyond pattern matching, and such reasoning abilities are difficult to formulate as data-based input–output function fitting. The reliability problem has become increasingly important under the new paradigm of semantic “multimodal” learning. In this article, I will discuss findings from our work to provide avenues for the development of robust and reliable computer vision systems, particularly by leveraging the interactions between vision and language. This article expands upon the invited talk at AAAI 2024 and covers three thematic areas: robustness of visual recognition systems, open-domain reliability for visual reasoning, and challenges and opportunities associated with generative models in vision.
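Of the failure causes listed above, calibration error is the most mechanical to check: a model is calibrated when its stated confidence matches its empirical accuracy, and Expected Calibration Error (ECE) measures the gap. A minimal sketch of one common way to compute it follows; the toy predictions are invented for illustration and do not come from the article.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the per-bin gap
    between mean confidence and empirical accuracy, weighted by bin size."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece

# An overconfident toy model: high stated confidence, mediocre accuracy.
confs = [0.95, 0.90, 0.92, 0.88, 0.97, 0.60, 0.55]
hits = [1, 0, 1, 0, 1, 1, 0]
print(f"ECE = {expected_calibration_error(confs, hits):.3f}")
```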

Ai Magazine, 45(3), 429–435. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12194
Citations: 0
Better environments for better AI
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-08-07 · DOI: 10.1002/aaai.12187
Sarah Keren

Most AI research focuses exclusively on the AI agent itself, that is, given some input, what are the improvements to the agent's reasoning that will yield the best possible output? In my research, I take a novel approach to increasing the capabilities of AI agents via the use of AI to design the environments in which they are intended to act. My methods identify the inherent capabilities and limitations of AI agents and find the best way to modify their environment in order to maximize performance. With this agenda in mind, I describe here several research projects that vary in their objective, in the AI methodologies that are applied for finding optimal designs, and in the real-world applications to which they correspond. I also discuss how the different projects fit within my overarching objective of using AI to promote effective multi-agent collaboration and to enhance the way robots and machines interact with humans.
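The central idea — keep the agent fixed and search over modifications to its environment — can be illustrated with a tiny brute-force sketch. The grid world, the deliberately simple agent, and the candidate modifications below are all inventions for illustration, not from the article.

```python
# Fixed agent, searchable environment: evaluate each candidate modification
# and keep the one under which the unchanged agent performs best.

GRID, GOAL = 5, (4, 4)

def greedy_agent_steps(blocked):
    """A deliberately simple agent: prefer moving right, else down.
    Returns steps to reach the goal, or None if it gets stuck."""
    x, y = 0, 0
    for step in range(50):
        if (x, y) == GOAL:
            return step
        for nx, ny in ((x + 1, y), (x, y + 1)):
            if nx < GRID and ny < GRID and (nx, ny) not in blocked:
                x, y = nx, ny
                break
        else:
            return None                          # no legal move: stuck
    return None

base_obstacles = {(1, 0), (0, 1), (3, 2)}        # traps the agent at the start

# Candidate designs: leave the environment alone, or remove one obstacle.
candidates = [set()] + [{cell} for cell in base_obstacles]

def score(removal):
    steps = greedy_agent_steps(base_obstacles - removal)
    return steps if steps is not None else 999   # stuck = very bad

best = min(candidates, key=score)
print("remove:", best or "nothing", "-> steps:", score(best))
```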

Ai Magazine, 45(3), 369–375. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12187
Citations: 0
Combating misinformation in the age of LLMs: Opportunities and challenges
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-08-01 · DOI: 10.1002/aaai.12188
Canyu Chen, Kai Shu

Misinformation such as fake news and rumors is a serious threat to information ecosystems and public trust. The emergence of large language models (LLMs) has great potential to reshape the landscape of combating misinformation. Generally, LLMs can be a double-edged sword in the fight. On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities. Thus, one emerging question is: can we utilize LLMs to combat misinformation? On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale. Then, another important question is: how can we combat LLM-generated misinformation? In this paper, we first systematically review the history of combating misinformation before the advent of LLMs. We then illustrate the current efforts on, and present an outlook for, each of these two fundamental questions. The goal of this survey paper is to facilitate progress in utilizing LLMs to fight misinformation and to call for interdisciplinary efforts from different stakeholders to combat LLM-generated misinformation.
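The first question — using an LLM as a misinformation detector — is often operationalized as claim verification against retrieved evidence. The sketch below shows that pattern; the prompt template, the stub llm() function, and the example claim are hypothetical stand-ins, not a method prescribed by the paper.

```python
PROMPT = """You are a fact-checking assistant.
Claim: {claim}
Evidence: {evidence}
Answer with exactly one of SUPPORTED, REFUTED, NOT ENOUGH INFO,
then one sentence of justification."""

def llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call; returns a canned reply here."""
    return "REFUTED. The evidence directly contradicts the claim."

def check_claim(claim: str, evidence: str) -> dict:
    reply = llm(PROMPT.format(claim=claim, evidence=evidence))
    verdict = reply.split(".", 1)[0].strip()     # first sentence = label
    return {"claim": claim, "verdict": verdict, "rationale": reply}

print(check_claim(
    claim="Drinking seawater cures dehydration.",
    evidence="Medical sources state that seawater ingestion worsens dehydration.",
))
```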

Ai Magazine, 45(3), 354–368. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12188
Citations: 0
Food information engineering
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-07-31 · DOI: 10.1002/aaai.12185
Azanzi Jiomekong, Allard Oelen, Sören Auer, Anna-Lena Lorenz, Lars Vogt

Food information engineering relies on statistical and AI techniques (e.g., symbolic, connectionist, and neurosymbolic AI) for collecting, storing, processing, and diffusing food information, and for putting it in a form exploitable by humans and machines. Food information is collected manually and automatically. Once collected, food information is organized using tabular data representation schemas or symbolic, connectionist, or neurosymbolic AI techniques. Once collected, processed, and stored, food information is diffused to different stakeholders in appropriate formats. Even though neurosymbolic AI has shown promising results in many domains, we found that this approach is rarely used in the domain of food information engineering. This paper aims to serve as a good reference for food information engineering researchers. Unlike existing reviews on the subject, we cover all aspects of food information engineering and link the paper to online resources built using the Open Research Knowledge Graph. These resources comprise templates, comparison tables of research contributions, and smart reviews. All of these resources are organized in the “Food Information Engineering” observatory and will be continually updated with new research contributions.
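As one concrete reading of the symbolic organization step, food facts can be stored as subject–predicate–object triples and queried by pattern matching. The vocabulary and facts in this sketch are invented examples, not entries from the paper's Open Research Knowledge Graph resources.

```python
# Food facts as subject-predicate-object triples, queried with a wildcard
# pattern match -- the core mechanic behind knowledge-graph organization.

triples = {
    ("tomato", "isA", "vegetable"),   # culinary, not botanical, classification
    ("tomato", "richIn", "lycopene"),
    ("spinach", "isA", "vegetable"),
    ("spinach", "richIn", "iron"),
    ("lycopene", "isA", "antioxidant"),
}

def query(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Which foods are rich in something classified as an antioxidant?
antioxidants = {t[0] for t in query(p="isA", o="antioxidant")}
print([t[0] for t in query(p="richIn") if t[2] in antioxidants])
```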

Ai Magazine, 45(3), 338–353. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12185
Citations: 0
XAI is in trouble
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-07-29 · DOI: 10.1002/aaai.12184
Rosina O Weber, Adam J Johs, Prateek Goel, João Marques Silva

Researchers focusing on how artificial intelligence (AI) methods explain their decisions often discuss controversies and limitations. Some even assert that most publications offer little to no valuable contributions. In this article, we substantiate the claim that explainable AI (XAI) is in trouble by describing and illustrating four problems: disagreements on the scope of XAI; a lack of definitional cohesion, precision, and adoption; issues with the motivations for XAI research; and limited and inconsistent evaluations. As we delve into their potential underlying sources, our analysis finds that these problems seem to originate from AI researchers succumbing to the pitfalls of interdisciplinarity or from insufficient scientific rigor. Analyzing these potential factors, we discuss the literature, at times coming across unexplored research questions. Hoping to alleviate existing problems, we make recommendations on precautions against the challenges of interdisciplinarity and propose directions in support of scientific rigor.

Ai Magazine, 45(3), 300–316. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12184
Citations: 0
Implementation of the EU AI act calls for interdisciplinary governance
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-07-19 · DOI: 10.1002/aaai.12183
Huixin Zhong

The European Parliament passed the EU AI Act in 2024, an important milestone toward the world's first comprehensive AI law formally taking effect. Although this is a significant achievement, the real work begins with putting these rules into action, a journey filled with challenges and opportunities. This perspective article reviews recent interdisciplinary research aimed at facilitating the implementation of the EU AI Act's provisions on prohibited AI practices. It also explores the future efforts needed to effectively enforce the ban on those practices across the EU market, and the challenges associated with such enforcement. Addressing these future tasks and challenges calls for the establishment of an interdisciplinary governance framework. Such a framework may contain a workflow that identifies the necessary expertise and coordinates experts' collaboration at different stages of AI governance. It also involves developing and implementing a set of compliance and ethical safeguards to ensure effective management and supervision of AI practices.

Ai Magazine, 45(3), 333–337. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12183
Citations: 0
In search of verifiability: Explanations rarely enable complementary performance in AI-advised decision making
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-07-01 · DOI: 10.1002/aaai.12182
Raymond Fok, Daniel S. Weld

The current literature on AI-advised decision making—involving explainable AI systems advising human decision makers—presents a series of inconclusive and confounding results. To synthesize these findings, we propose a simple theory that elucidates the frequent failure of AI explanations to engender appropriate reliance and complementary decision making performance. In contrast to other common desiderata, for example, interpretability or spelling out the AI's reasoning process, we argue that explanations are only useful to the extent that they allow a human decision maker to verify the correctness of the AI's prediction. Prior studies find in many decision making contexts that AI explanations do not facilitate such verification. Moreover, most tasks fundamentally do not allow easy verification, regardless of explanation method, limiting the potential benefit of any type of explanation. We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
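As a toy reading of the two reliance notions named above: outcome-graded reliance credits a decision when the human followed the AI exactly when it happened to be correct, while strategy-graded reliance credits following the AI whenever doing so was the better policy for that kind of case. The log format and threshold below are invented simplifications; see the paper for the formal definitions.

```python
# Each record: (case_type, ai_was_correct, human_followed_ai)
log = [
    ("easy", True, True), ("easy", True, True), ("easy", False, True),
    ("hard", False, False), ("hard", True, False), ("hard", False, True),
]

def outcome_graded(records):
    """Fraction of decisions where the human followed the AI iff it was right."""
    ok = sum(followed == correct for _, correct, followed in records)
    return ok / len(records)

def strategy_graded(records, threshold=0.5):
    """Follow the AI iff its accuracy on that case type exceeds the threshold."""
    by_type = {}
    for case, correct, _ in records:
        by_type.setdefault(case, []).append(correct)
    acc = {c: sum(v) / len(v) for c, v in by_type.items()}
    ok = sum(followed == (acc[case] > threshold)
             for case, _, followed in records)
    return ok / len(records)

# The same log scores differently under the two notions.
print("outcome-graded reliance:", outcome_graded(log))    # 0.5
print("strategy-graded reliance:", strategy_graded(log))  # ~0.83
```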

Ai Magazine, 45(3), 317–332. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12182
Citations: 0
A new era of AI-assisted journalism at Bloomberg
IF 0.9 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-06-05 · DOI: 10.1002/aaai.12181
Claudia Quinonez, Edgar Meij
Artificial intelligence (AI) is impacting and has the potential to upend entire business models and structures. The adoption of such new technologies to support newsgathering processes is established practice for newsrooms. For AI specifically, we are seeing a new era of AI-assisted journalism emerge, with trust in AI-driven analyses and accuracy of results as core tenets.

In Part I of this position paper, we discuss the contributions of six recently published research papers co-authored by Bloomberg's Artificial Intelligence Engineering team that show the intricacies of training AI models for reliable newsgathering processes. The papers investigate (a) models for updated headline generation, showing that headline generation models benefit from access to the past state of the article; (b) sequentially controlled text generation, a novel task for which we show that, in general, more structured awareness results in higher control accuracy and grammatical coherence; (c) chart summarization, which looks into identifying the key message and generating sentences that describe salient information in multimodal documents; (d) a semistructured natural language inference task used to develop a framework for data augmentation for tabular inference; (e) the introduction of a human-annotated dataset (ENTSUM) for controllable summarization, with a focus on named entities as the aspect to control; and (f) a novel defense mechanism against adversarial attacks (ATINTER). We also examine Bloomberg's research work building its own internal, not-for-commercial-use large language model, BloombergGPT, and training it with the goal of demonstrating support for a wide range of tasks within the financial industry.

In Part II, we analyze the evolution of automation tasks in the Bloomberg newsroom that led to the creation of Bloomberg's News Innovation Lab. Technology-assisted content creation has been a reality at Bloomberg News for nearly a decade and has evolved from rules-based headline generation from structured files to the constant exploration of potential ways to assist story creation and storytelling in the financial domain. The Lab now oversees the operation of hundreds of software bots that create semi- and fully automated stories of financial relevance, providing journalists with depth in terms of data and analysis, speed in terms of reacting to breaking news, and transparency in corners of the financial world where data investigation is a gigantic undertaking. The Lab recently introduced new tools that give journalists the ability to explore automation on demand, while it continues to experiment with ways to assist story production.

In Part III, we conceptually discuss the transformative impact that generative AI can have in any newsroom, along with considerations about the technology's shortcomings in its current state of development. As with any revolutionary new technology and exciting research opportunity, part of the challenge lies in balancing the potential positive and negative impacts on society. We present our own principles and guidelines for experimenting with new generative AI technologies. Bloomberg News's style guide reminds us that our journalism is aimed at what may be the world's most sophisticated audience, for whom accuracy is paramount.
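One of the Part I contributions lends itself to a small illustration: in controllable summarization of the ENTSUM variety, the system is conditioned on a named entity and summarizes the document with respect to it. The prompt and the stub llm() call below are hypothetical, not Bloomberg's implementation.

```python
def llm(prompt: str) -> str:
    """Stand-in for a summarization model call."""
    return "[entity-focused summary would appear here]"

def entity_controlled_summary(document: str, entity: str) -> str:
    """Condition the summarizer on a control entity, ENTSUM-style."""
    prompt = (f"Summarize the following article, focusing only on facts "
              f"about {entity}.\nArticle: {document}")
    return llm(prompt)

doc = ("Acme Corp shares rose 4% after strong earnings. Rival Globex fell "
       "2% amid supply-chain concerns.")
print(entity_controlled_summary(doc, entity="Globex"))
```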
Ai Magazine, 45(2), 187–199. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12181
Citations: 0
Exploring the impact of automated correction of misinformation in social media
IF 0.9 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-06-04 · DOI: 10.1002/aaai.12180
Grégoire Burel, Mohammadali Tavakoli, Harith Alani

Correcting misinformation is a complex task, influenced by various psychological, social, and technical factors. Most research evaluation methods for identifying effective correction approaches tend to rely on crowdsourcing, questionnaires, lab-based simulations, or hypothetical scenarios. However, the translation of these methods and findings into real-world settings, where individuals willingly and freely disseminate misinformation, remains largely unexplored. Consequently, we lack a comprehensive understanding of how individuals who share misinformation in natural online environments would respond to corrective interventions. In this study, we explore the effectiveness of corrective messaging on 3898 users who shared misinformation on Twitter/X over two years. We designed and deployed a bot to automatically identify individuals who share misinformation and subsequently alert them to related fact-checks in various message formats. Our analysis shows that only a small minority of users react positively to the corrective messages, with most users either ignoring them or reacting negatively. Nevertheless, we also found that more active users were proportionally more likely to react positively to corrections, and we observed that different message tones made particular user groups more likely to react to the bot.
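The pipeline the study describes — detect a shared piece of misinformation, look up a related fact-check, and reply in a chosen tone — can be sketched as below. The keyword-matching rule, message templates, and data are invented stand-ins; the authors' bot worked against real fact-check databases and the Twitter/X platform.

```python
FACT_CHECKS = {
    "5g causes covid": "https://example.org/factcheck/5g-covid",
}

TEMPLATES = {
    "neutral": "A fact-check related to this post: {url}",
    "courteous": "Hi! In case it helps, here is a fact-check on this topic: {url}",
}

def find_fact_check(post: str):
    """Naive keyword match standing in for a real claim-matching model."""
    text = post.lower()
    for claim, url in FACT_CHECKS.items():
        if all(word in text for word in claim.split()):
            return url
    return None

def correction_reply(post: str, tone: str = "courteous"):
    """Compose a corrective reply, or None if no fact-check matches."""
    url = find_fact_check(post)
    return TEMPLATES[tone].format(url=url) if url else None

print(correction_reply("Apparently 5G causes COVID, wake up people!"))
```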

Ai Magazine, 45(2), 227–245. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12180
Citations: 0