
ACM Transactions on Interactive Intelligent Systems: Latest Publications

How should an AI trust its human teammates? Exploring possible cues of artificial trust
IF 3.4 | CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-12-06 | DOI: 10.1145/3635475
Carolina Centeio Jorge, Catholijn M. Jonker, Myrthe L. Tielman

In teams composed of humans, we use trust in others to make decisions, such as what to do next, who to help and who to ask for help. When a team member is artificial, they should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthiness as the combination of (1) whether someone will do a task and (2) whether they can do it. With building beliefs in trustworthiness as an ultimate goal, we explore which internal factors (krypta) of the human may play a role (e.g. ability, benevolence and integrity) in determining trustworthiness, according to existing literature. Furthermore, we investigate which observable metrics (manifesta) an agent may take into account as cues for the human teammate’s krypta in an online 2D grid-world experiment (n=54). Results suggest that cues of ability, benevolence and integrity influence trustworthiness. However, we observed that trustworthiness is mainly influenced by the human’s playing strategy and cost-benefit analysis, which deserves further investigation. This is a first step towards building informed beliefs of human trustworthiness in human-AI teamwork.
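The "will do" versus "can do" decomposition described above can be made concrete with a short sketch. The snippet below is a minimal illustration only, not the authors' model: the class and function names, the weighted combination, and all numeric values are assumptions introduced here, standing in for beliefs an agent might form from observed cues (manifesta) about the human's krypta.

```python
from dataclasses import dataclass

@dataclass
class KryptaBeliefs:
    """Hypothetical 0-1 beliefs about a human teammate's internal factors."""
    ability: float      # can they do the task?
    benevolence: float  # are they inclined to help with it?
    integrity: float    # do they stick to shared norms and agreements?

def trustworthiness(k: KryptaBeliefs, w_benevolence: float = 0.5) -> float:
    """Combine 'will do' and 'can do' into one task-specific estimate.

    'Will do' is modelled as a weighted mix of benevolence and integrity, and
    the product reflects that willingness without ability (or vice versa) is
    not enough. The functional form is an illustrative assumption.
    """
    will_do = w_benevolence * k.benevolence + (1 - w_benevolence) * k.integrity
    return will_do * k.ability

# Example with made-up cue values, e.g. inferred from a grid-world game log.
beliefs = KryptaBeliefs(ability=0.8, benevolence=0.6, integrity=0.9)
print(f"estimated trustworthiness: {trustworthiness(beliefs):.2f}")
```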

Citations: 0
I Know This Looks Bad, But I Can Explain: Understanding When AI Should Explain Actions In Human-AI Teams
IF 3.4 | CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-12-02 | DOI: 10.1145/3635474
Rui Zhang, Christopher Flathmann, Geoff Musick, Beau Schelble, Nathan J. McNeese, Bart Knijnenburg, Wen Duan

Explanation of artificial intelligence (AI) decision-making has become an important research area in human-computer interaction (HCI) and computer-supported teamwork research. While plenty of research has investigated AI explanations with an intent to improve AI transparency and human trust in AI, how AI explanations function in teaming environments remains unclear. Given that a major benefit of AI explanations is to increase human trust, understanding how AI explanations impact human trust is crucial to effective human-AI teamwork. An online experiment was conducted with 156 participants to explore this question by examining how a teammate’s explanations impact the perceived trust of the teammate and the effectiveness of the team, and how these impacts vary based on whether the teammate is a human or an AI. This study shows that explanations facilitated trust in AI teammates when explaining why the AI disobeyed humans’ orders but hindered trust when explaining why an AI lied to humans. In addition, participants’ personal characteristics (e.g., their gender and the individual’s ethical framework) impacted their perceptions of AI teammates both directly and indirectly in different scenarios. Our study contributes to interactive intelligent systems and HCI by shedding light on how an AI teammate’s actions and corresponding explanations are perceived by humans while identifying factors that impact trust and perceived effectiveness. This work provides an initial understanding of AI explanations in human-AI teams, which future research can build upon in exploring how to implement AI explanations in collaborative environments.

Citations: 0
Meaningful Explanation Effect on User’s Trust in an AI Medical System: Designing Explanations for Non-Expert Users
CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-11-08 | DOI: 10.1145/3631614
Retno Larasati, Anna De Liddo, Enrico Motta
Whereas most research in AI system explanation for healthcare applications looks at developing algorithmic explanations targeted at AI experts or medical professionals, the question we raise is: How do we build meaningful explanations for laypeople? And how does a meaningful explanation affect users’ trust perceptions? Our research investigates how the key factors affecting human-AI trust change in the light of human expertise, and how to design explanations specifically targeted at non-experts. By means of a stage-based design method, we map the ways laypeople understand AI explanations in a User Explanation Model. We also map both medical professionals’ and AI experts’ practice in an Expert Explanation Model. A Target Explanation Model is then proposed, which represents how experts’ practice and laypeople’s understanding can be combined to design meaningful explanations. Design guidelines for meaningful AI explanations are proposed, and a prototype of AI system explanation for non-expert users in a breast cancer scenario is presented and assessed on how it affects users’ trust perceptions.
Citations: 0
Explainable Activity Recognition in Videos using Deep Learning and Tractable Probabilistic Models
CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-10-12 | DOI: 10.1145/3626961
Chiradeep Roy, Mahsan Nourani, Shivvrat Arya, Mahesh Shanbhag, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
We consider the following video activity recognition (VAR) task: given a video, infer the set of activities being performed in the video and assign each frame to an activity. Although VAR can be solved accurately using existing deep learning techniques, deep networks are neither interpretable nor explainable and as a result their use is problematic in high stakes decision-making applications (e.g., in healthcare, experimental Biology, aviation, law, etc.). In such applications, failure may lead to disastrous consequences and therefore it is necessary that the user is able to either understand the inner workings of the model or probe it to understand its reasoning patterns for a given decision. We address these limitations of deep networks by proposing a new approach that feeds the output of a deep model into a tractable, interpretable probabilistic model called a dynamic conditional cutset network that is defined over the explanatory and output variables and then performing joint inference over the combined model. The two key benefits of using cutset networks are: (a) they explicitly model the relationship between the output and explanatory variables and as a result the combined model is likely to be more accurate than the vanilla deep model and (b) they can answer reasoning queries in polynomial time and as a result they can derive meaningful explanations by efficiently answering explanation queries. We demonstrate the efficacy of our approach on two datasets, Textually Annotated Cooking Scenes (TACoS), and wet lab, using conventional evaluation measures such as the Jaccard Index and Hamming Loss, as well as a human-subjects study.
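To illustrate the general pattern of feeding a deep model's per-frame outputs into a tractable temporal model for joint inference, the sketch below uses a plain Viterbi decode over a hand-written activity transition matrix. This is a simplified stand-in for the paper's dynamic conditional cutset networks, not a reimplementation of them, and all probabilities are made up.

```python
import numpy as np

def viterbi(frame_probs: np.ndarray, transition: np.ndarray) -> list[int]:
    """Most likely activity sequence given per-frame class probabilities.

    frame_probs: (T, K) softmax outputs of a per-frame deep classifier.
    transition:  (K, K) activity transition probabilities, the small,
                 human-inspectable part of the combined model.
    """
    T, K = frame_probs.shape
    log_p = np.log(frame_probs + 1e-12)
    log_a = np.log(transition + 1e-12)
    score = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = log_p[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_a   # (previous K, current K)
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_p[t]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 3 activities, 10 frames of made-up deep-model outputs.
rng = np.random.default_rng(0)
frame_probs = rng.dirichlet(np.ones(3), size=10)
transition = np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])
print(viterbi(frame_probs, transition))
```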
Citations: 0
XAutoML: A Visual Analytics Tool for Understanding and Validating Automated Machine Learning
CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-09-28 | DOI: 10.1145/3625240
Marc-André Zöller, Waldemar Titov, Thomas Schlegel, Marco F. Huber
In the last ten years, various automated machine learning (AutoML) systems have been proposed to build end-to-end machine learning (ML) pipelines with minimal human interaction. Even though such automatically synthesized ML pipelines are able to achieve competitive performance, recent studies have shown that users do not trust models constructed by AutoML due to missing transparency of AutoML systems and missing explanations for the constructed ML pipelines. In a requirements analysis study with 36 domain experts, data scientists, and AutoML researchers from different professions with vastly different expertise in ML, we collect detailed informational needs for AutoML. We propose XAutoML, an interactive visual analytics tool for explaining arbitrary AutoML optimization procedures and ML pipelines constructed by AutoML. XAutoML combines interactive visualizations with established techniques from explainable artificial intelligence (XAI) to make the complete AutoML procedure transparent and explainable. By integrating XAutoML with JupyterLab, experienced users can extend the visual analytics with ad-hoc visualizations based on information extracted from XAutoML. We validate our approach in a user study with the same diverse user group from the requirements analysis. All participants were able to extract useful information from XAutoML, leading to a significantly increased understanding of ML pipelines produced by AutoML and the AutoML optimization itself.
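As a rough picture of the kind of ad-hoc, notebook-based analysis described above, the sketch below plots a hypothetical optimization trace with pandas and matplotlib inside JupyterLab. The DataFrame contents are invented, and the step of exporting this data from an AutoML run or from XAutoML itself is not shown, since that tool's API is not documented in this listing.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical optimization trace; in practice this would be extracted from
# the AutoML system or the visual analytics tool rather than typed by hand.
candidates = pd.DataFrame({
    "timestamp": [0, 30, 60, 90, 120, 150],              # seconds since start
    "val_accuracy": [0.71, 0.74, 0.78, 0.77, 0.81, 0.83],
})
candidates["incumbent"] = candidates["val_accuracy"].cummax()

# Ad-hoc visualization in a notebook cell: candidate vs. best-so-far accuracy.
ax = candidates.plot(x="timestamp", y=["val_accuracy", "incumbent"], marker="o")
ax.set_xlabel("optimization time [s]")
ax.set_ylabel("validation accuracy")
ax.set_title("Candidate ML pipelines over time (hypothetical data)")
plt.show()
```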
Citations: 2
2022 TiiS Best Paper Announcement
CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-09-11 | DOI: 10.1145/3615590
Michelle Zhou, Shlomo Berkovsky
The IEEE TRANSACTIONS ON SIGNAL PROCESSING is fortunate to attract submissions of the highest quality and to publish articles that deal with topics that are at the forefront of what is happening in the field of signal processing and its adjacent areas. ...
Citations: 0
Generalisable Dialogue-based Approach for Active Learning of Activities of Daily Living
IF 3.4 | CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-08-14 | DOI: 10.1145/3616017
Ronnie Smith, M. Dragone
While Human Activity Recognition systems may benefit from Active Learning by allowing users to self-annotate their Activities of Daily Living (ADLs), many proposed methods for collecting such annotations are for short-term data collection campaigns for specific datasets. We present a reusable dialogue-based approach to user interaction for active learning in activity recognition systems, which utilises semantic similarity measures and a dataset of natural language descriptions of common activities (which we make publicly available). Our approach involves system-initiated dialogue, including follow-up questions to reduce ambiguity in user responses where appropriate. We apply this approach to two active learning scenarios: (i) using an existing CASAS dataset, demonstrating long-term usage; and (ii) using an online activity recognition system, which tackles the issue of online segmentation and labelling. We demonstrate our work in context, in which a natural language interface provides knowledge that can help interpret other multi-modal sensor data. We provide results highlighting the potential of our dialogue- and semantic similarity-based approach. We evaluate our work: (i) quantitatively, as an efficient way to seek users’ input for active learning of ADLs; and (ii) qualitatively, through a user study in which users were asked to compare our approach and an established method. Results show the potential of our approach as a hands-free interface for annotation of sensor data as part of an active learning system. We provide insights into the challenges of active learning for activity recognition under real-world conditions and identify potential ways to address them.
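The matching step between a user's free-text answer and candidate activity labels can be sketched with off-the-shelf tools. The snippet below uses TF-IDF cosine similarity from scikit-learn as a simple stand-in for the semantic similarity measures and activity-description dataset mentioned above; the labels, descriptions, and dialogue turn are all invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented natural-language descriptions of candidate ADL labels.
ACTIVITY_DESCRIPTIONS = {
    "prepare_meal": "cooking food in the kitchen, using the stove, cutting vegetables",
    "watch_tv": "sitting on the sofa in the living room watching television",
    "sleep": "lying in bed at night with the lights off",
}

def rank_activities(user_answer: str) -> list[tuple[str, float]]:
    """Rank candidate activity labels by textual similarity to the user's answer."""
    labels = list(ACTIVITY_DESCRIPTIONS)
    corpus = [ACTIVITY_DESCRIPTIONS[label] for label in labels] + [user_answer]
    tfidf = TfidfVectorizer().fit_transform(corpus)
    sims = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    return sorted(zip(labels, sims), key=lambda pair: pair[1], reverse=True)

# Dialogue turn: the system asked what the user was doing just now.
ranked = rank_activities("I was in the kitchen making dinner on the stove")
print(ranked)  # a near-tie between top labels could trigger a follow-up question
```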
Citations: 0
When Biased Humans Meet Debiased AI: A Case Study in College Major Recommendation
IF 3.4 | CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-08-01 | DOI: 10.1145/3611313
Clarice Wang, Kathryn Wang, Andrew Bian, Rashidul Islam, Kamrun Keya, James R. Foulds, Shimei Pan
Currently, there is a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research which aims to mitigate discriminatory bias in AI algorithms, e.g., along lines of gender, age, and race. While most research in this domain focuses on developing fair AI algorithms, in this work, we examine the challenges which arise when humans and fair AI interact. Our results show that due to an apparent conflict between human preferences and fairness, a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world. Using college major recommendation as a case study, we build a fair AI recommender by employing gender debiasing machine learning techniques. Our offline evaluation showed that the debiased recommender makes fairer career recommendations without sacrificing its accuracy in prediction. Nevertheless, an online user study of more than 200 college students revealed that participants on average prefer the original biased system over the debiased system. Specifically, we found that perceived gender disparity is a determining factor for the acceptance of a recommendation. In other words, we cannot fully address the gender bias issue in AI recommendations without addressing the gender bias in humans. We conducted a follow-up survey to gain additional insights into the effectiveness of various design options that can help participants to overcome their own biases. Our results suggest that making fair AI explainable is crucial for increasing its adoption in the real world.
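One generic way to quantify the kind of gender disparity manipulated in this study is a demographic-parity gap over recommendation rates. The sketch below is a standard fairness metric on toy data, not the authors' debiasing technique or dataset; the recommendation and gender arrays are made up.

```python
import numpy as np

def demographic_parity_gap(recommended: np.ndarray, protected: np.ndarray) -> float:
    """Absolute difference in recommendation rate between two groups.

    recommended: boolean array, True where the item (e.g. a CS major) was
                 recommended to that user.
    protected:   boolean array encoding a binary protected attribute.
    """
    return abs(recommended[protected].mean() - recommended[~protected].mean())

# Toy data: whether a STEM major was recommended, split by reported gender.
recommended = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0], dtype=bool)
is_female = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)
print(f"demographic parity gap: {demographic_parity_gap(recommended, is_female):.2f}")
```

A debiased recommender would aim to keep this gap small while preserving predictive accuracy, which is the trade-off the offline evaluation above refers to.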
Citations: 0
Integrity Based Explanations for Fostering Appropriate Trust in AI Agents
IF 3.4 | CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-07-24 | DOI: https://dl.acm.org/doi/10.1145/3610578
Siddharth Mehrotra, Carolina Centeio Jorge, Catholijn M. Jonker, Myrthe L. Tielman

Appropriate trust is an important component of the interaction between people and AI systems, in that ‘inappropriate’ trust can cause disuse, misuse or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from their users. Out of the aspects that influence trust, this paper focuses on the effect of showing integrity. In particular, this paper presents a study of how different integrity-based explanations made by an AI agent affect the appropriateness of trust of a human in that agent. To explore this, (1) we provide a formal definition to measure appropriate trust, and (2) we present a between-subjects user study with 160 participants who collaborated with an AI agent in such a task. In the study, the AI agent assisted its human partner in estimating calories on a food plate by expressing its integrity through explanations focusing on either honesty, transparency or fairness. Our results show that (a) an agent who displays its integrity by being explicit about potential biases in data or algorithms achieved appropriate trust more often compared to being honest about capability or transparent about the decision-making process, and (b) subjective trust builds up and recovers better with honesty-like integrity explanations. Our results contribute to the design of agent-based AI systems that guide humans to appropriately trust them, a formal method to measure appropriate trust, and how to support humans in calibrating their trust in AI.
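Since the paper's formal definition of appropriate trust is not reproduced in this listing, the sketch below illustrates one simple proxy: the fraction of trials on which a participant's reliance decision matched whether the agent's output was actually correct. The function and the calorie-trial data are illustrative assumptions, not the authors' measure.

```python
def appropriateness(relied: list[bool], agent_correct: list[bool]) -> float:
    """Fraction of trials where reliance matched the agent's actual performance.

    Trust is counted as appropriate on a trial if the person relied on the
    agent when it was correct, or withheld reliance when it was wrong. This
    alignment score is a proxy, not the paper's formal definition.
    """
    matches = [r == c for r, c in zip(relied, agent_correct)]
    return sum(matches) / len(matches)

# Made-up calorie-estimation trials: did the participant accept the agent's
# estimate, and was that estimate actually accurate?
relied = [True, True, False, True, False, True]
agent_correct = [True, False, False, True, True, True]
print(f"appropriate trust score: {appropriateness(relied, agent_correct):.2f}")
```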

Citations: 0