
ACM Transactions on Interactive Intelligent Systems: Latest Publications

XAutoML: A Visual Analytics Tool for Understanding and Validating Automated Machine Learning
CAS Q4, Computer Science; Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE); Pub Date: 2023-09-28; DOI: 10.1145/3625240
Marc-André Zöller, Waldemar Titov, Thomas Schlegel, Marco F. Huber
In the last ten years, various automated machine learning (AutoML) systems have been proposed to build end-to-end machine learning (ML) pipelines with minimal human interaction. Even though such automatically synthesized ML pipelines are able to achieve competitive performance, recent studies have shown that users do not trust models constructed by AutoML due to missing transparency of AutoML systems and missing explanations for the constructed ML pipelines. In a requirements analysis study with 36 domain experts, data scientists, and AutoML researchers from different professions with vastly different expertise in ML, we collect detailed informational needs for AutoML. We propose XAutoML, an interactive visual analytics tool for explaining arbitrary AutoML optimization procedures and ML pipelines constructed by AutoML. XAutoML combines interactive visualizations with established techniques from explainable artificial intelligence (XAI) to make the complete AutoML procedure transparent and explainable. By integrating XAutoML with JupyterLab, experienced users can extend the visual analytics with ad-hoc visualizations based on information extracted from XAutoML. We validate our approach in a user study with the same diverse user group from the requirements analysis. All participants were able to extract useful information from XAutoML, leading to a significantly increased understanding of ML pipelines produced by AutoML and the AutoML optimization itself.
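As an illustration of the ad-hoc, notebook-based analysis mentioned above, the Python sketch below plots the best validation accuracy found so far over optimization time from a small hand-made table of candidate results. It is only a sketch: it does not call the actual XAutoML API, and the `candidates` DataFrame stands in for whatever per-candidate information the tool exports into JupyterLab.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical stand-in for information exported from an AutoML run; the real
    # XAutoML/JupyterLab integration would supply equivalent per-candidate data.
    candidates = pd.DataFrame({
        "timestamp":  [10, 25, 40, 70, 95, 130],   # seconds since optimization start
        "accuracy":   [0.71, 0.74, 0.78, 0.77, 0.81, 0.83],
        "classifier": ["svm", "rf", "rf", "knn", "rf", "gb"],
    })

    # Ad-hoc visualization: incumbent (best-so-far) validation accuracy over time.
    candidates["incumbent"] = candidates["accuracy"].cummax()
    plt.step(candidates["timestamp"], candidates["incumbent"], where="post", label="incumbent")
    plt.scatter(candidates["timestamp"], candidates["accuracy"], color="grey", label="candidates")
    plt.xlabel("optimization time [s]")
    plt.ylabel("validation accuracy")
    plt.legend()
    plt.show()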
Citations: 2
2022 TiiS Best Paper Announcement
CAS Q4, Computer Science; Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE); Pub Date: 2023-09-11; DOI: 10.1145/3615590
Michelle Zhou, Shlomo Berkovsky
The IEEE TRANSACTIONS ON SIGNAL PROCESSING is fortunate to attract submissions of the highest quality and to publish articles that deal with topics that are at the forefront of what is happening in the field of signal processing and its adjacent areas. ...
Citations: 0
Generalisable Dialogue-based Approach for Active Learning of Activities of Daily Living
IF 3.4; CAS Q4, Computer Science; Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE); Pub Date: 2023-08-14; DOI: 10.1145/3616017
Ronnie Smith, M. Dragone
While Human Activity Recognition systems may benefit from Active Learning by allowing users to self-annotate their Activities of Daily Living (ADLs), many proposed methods for collecting such annotations are for short-term data collection campaigns for specific datasets. We present a reusable dialogue-based approach to user interaction for active learning in activity recognition systems, which utilises semantic similarity measures and a dataset of natural language descriptions of common activities (which we make publicly available). Our approach involves system-initiated dialogue, including follow-up questions to reduce ambiguity in user responses where appropriate. We apply this approach to two active learning scenarios: (i) using an existing CASAS dataset, demonstrating long-term usage; and (ii) using an online activity recognition system, which tackles the issue of online segmentation and labelling. We demonstrate our work in context, in which a natural language interface provides knowledge that can help interpret other multi-modal sensor data. We provide results highlighting the potential of our dialogue- and semantic similarity-based approach. We evaluate our work: (i) quantitatively, as an efficient way to seek users’ input for active learning of ADLs; and (ii) qualitatively, through a user study in which users were asked to compare our approach and an established method. Results show the potential of our approach as a hands-free interface for annotation of sensor data as part of an active learning system. We provide insights into the challenges of active learning for activity recognition under real-world conditions and identify potential ways to address them.
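As a rough sketch of the dialogue step described above, the Python snippet below matches a user's free-text reply against short natural-language activity descriptions and flags when a follow-up question would be needed. TF-IDF cosine similarity from scikit-learn is used here as a stand-in for the paper's semantic similarity measures, and the descriptions and threshold are invented for the example.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented example descriptions standing in for a dataset of natural language
    # descriptions of common activities.
    activity_descriptions = {
        "prepare_meal": "cooking food in the kitchen, using the stove and chopping ingredients",
        "watch_tv":     "sitting on the sofa in the living room watching television",
        "sleep":        "lying in bed at night resting with the lights off",
    }

    def rank_activities(user_reply, descriptions, threshold=0.2):
        """Rank candidate ADL labels by similarity to the user's reply; if the best
        match is weak, the dialogue manager would ask a follow-up question."""
        labels = list(descriptions)
        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform([user_reply] + [descriptions[l] for l in labels])
        similarities = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
        ranked = sorted(zip(labels, similarities), key=lambda pair: pair[1], reverse=True)
        needs_followup = ranked[0][1] < threshold
        return ranked, needs_followup

    ranked, ask_again = rank_activities("I was cooking some food in the kitchen", activity_descriptions)
    print(ranked, "follow-up needed:", ask_again)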
Citations: 0
When Biased Humans Meet Debiased AI: A Case Study in College Major Recommendation
IF 3.4; CAS Q4, Computer Science; Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE); Pub Date: 2023-08-01; DOI: 10.1145/3611313
Clarice Wang, Kathryn Wang, Andrew Bian, Rashidul Islam, Kamrun Keya, James R. Foulds, Shimei Pan
Currently, there is a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research which aims to mitigate discriminatory bias in AI algorithms, e.g., along lines of gender, age, and race. While most research in this domain focuses on developing fair AI algorithms, in this work, we examine the challenges which arise when humans and fair AI interact. Our results show that due to an apparent conflict between human preferences and fairness, a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world. Using college major recommendation as a case study, we build a fair AI recommender by employing gender debiasing machine learning techniques. Our offline evaluation showed that the debiased recommender makes fairer career recommendations without sacrificing its accuracy in prediction. Nevertheless, an online user study of more than 200 college students revealed that participants on average prefer the original biased system over the debiased system. Specifically, we found that perceived gender disparity is a determining factor for the acceptance of a recommendation. In other words, we cannot fully address the gender bias issue in AI recommendations without addressing the gender bias in humans. We conducted a follow-up survey to gain additional insights into the effectiveness of various design options that can help participants to overcome their own biases. Our results suggest that making fair AI explainable is crucial for increasing its adoption in the real world.
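The gender disparity discussed above can be quantified in several ways. The sketch below computes one simple demographic-parity-style gap over a toy recommendation log; it is not the paper's fairness measure, and the data are invented.

    from collections import Counter

    # Toy log of (user_gender, recommended_major) pairs. Invented data, only meant
    # to illustrate one simple disparity measure.
    recommendations = [
        ("f", "nursing"), ("f", "computer_science"), ("f", "biology"),
        ("m", "computer_science"), ("m", "computer_science"), ("m", "mechanical_eng"),
    ]

    def recommendation_rates(recs, major):
        """Share of each gender group that was recommended the given major."""
        group_sizes = Counter(gender for gender, _ in recs)
        hits = Counter(gender for gender, m in recs if m == major)
        return {g: hits[g] / group_sizes[g] for g in group_sizes}

    rates = recommendation_rates(recommendations, "computer_science")
    parity_gap = abs(rates["f"] - rates["m"])   # 0 would mean equal exposure across groups
    print(rates, "demographic parity gap:", round(parity_gap, 3))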
Citations: 0
Integrity Based Explanations for Fostering Appropriate Trust in AI Agents
IF 3.4; CAS Q4, Computer Science; Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE); Pub Date: 2023-07-24; DOI: https://dl.acm.org/doi/10.1145/3610578
Siddharth Mehrotra, Carolina Centeio Jorge, Catholijn M. Jonker, Myrthe L. Tielman

Appropriate trust is an important component of the interaction between people and AI systems, in that ‘inappropriate’ trust can cause disuse, misuse or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from their users. Out of the aspects that influence trust, this paper focuses on the effect of showing integrity. In particular, this paper presents a study of how different integrity-based explanations made by an AI agent affect the appropriateness of trust of a human in that agent. To explore this, (1) we provide a formal definition to measure appropriate trust, (2) present a between-subject user study with 160 participants who collaborated with an AI agent in such a task. In the study, the AI agent assisted its human partner in estimating calories on a food plate by expressing its integrity through explanations focusing on either honesty, transparency or fairness. Our results show that (a) an agent who displays its integrity by being explicit about potential biases in data or algorithms achieved appropriate trust more often compared to being honest about capability or transparent about the decision-making process, and (b) subjective trust builds up and recovers better with honesty-like integrity explanations. Our results contribute to the design of agent-based AI systems that guide humans to appropriately trust them, a formal method to measure appropriate trust, and how to support humans in calibrating their trust in AI.
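The abstract refers to a formal definition for measuring appropriate trust without spelling it out, so the sketch below uses a common proxy instead: the fraction of task instances on which the participant's reliance matched whether the agent was actually correct. The interaction log is invented, and the paper's definition may differ.

    # Toy interaction log: for each calorie estimate, was the agent correct, and did
    # the participant rely on it? Invented data; only a proxy for appropriate trust.
    interaction_log = [
        {"agent_correct": True,  "user_relied": True},    # justified reliance
        {"agent_correct": True,  "user_relied": False},   # under-trust
        {"agent_correct": False, "user_relied": True},    # over-trust
        {"agent_correct": False, "user_relied": False},   # justified skepticism
        {"agent_correct": True,  "user_relied": True},
    ]

    def appropriateness(log):
        """Fraction of decisions where reliance matched the agent's actual correctness."""
        matches = sum(entry["agent_correct"] == entry["user_relied"] for entry in log)
        return matches / len(log)

    print("appropriate trust rate:", appropriateness(interaction_log))   # 3/5 = 0.6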

Citations: 0
Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation using Eye-tracking Technology
IF 3.4; CAS Q4, Computer Science; Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE); Pub Date: 2023-07-13; DOI: 10.1145/3607145
Miguel Angel Meza Martínez, Mario Nadj, Moritz Langner, Peyman Toreini, Alexander Maedche
In Explainable Artificial Intelligence (XAI) research, various local model-agnostic methods have been proposed to explain individual predictions to users in order to increase the transparency of the underlying Artificial Intelligence (AI) systems. However, the user perspective has received less attention in XAI research, leading to a (1) lack of involvement of users in the design process of local model-agnostic explanations representations and (2) a limited understanding of how users visually attend them. Against this backdrop, we refined representations of local explanations from four well-established model-agnostic XAI methods in an iterative design process with users. Moreover, we evaluated the refined explanation representations in a laboratory experiment using eye-tracking technology as well as self-reports and interviews. Our results show that users do not necessarily prefer simple explanations and that their individual characteristics, such as gender and previous experience with AI systems, strongly influence their preferences. In addition, users find that some explanations are only useful in certain scenarios making the selection of an appropriate explanation highly dependent on context. With our work, we contribute to ongoing research to improve transparency in AI.
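For readers unfamiliar with local model-agnostic explanations, the sketch below produces one with LIME for a single prediction of a scikit-learn classifier. LIME is used only as a representative of the method family; the abstract does not name the four methods whose representations were refined in the study.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        mode="classification",
    )

    # Explain one prediction: which features pushed the model toward its decision?
    instance = data.data[25]
    explanation = explainer.explain_instance(instance, model.predict_proba, num_features=4)
    for feature, weight in explanation.as_list():
        print(f"{feature:40s} {weight:+.3f}")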
Citations: 0
VERB: Visualizing and Interpreting Bias Mitigation Techniques Geometrically for Word Representations
IF 3.4; CAS Q4, Computer Science; Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE); Pub Date: 2023-06-22; DOI: https://dl.acm.org/doi/10.1145/3604433
Archit Rathore, Sunipa Dev, Jeff M. Phillips, Vivek Srikumar, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei Zhang, Bei Wang

Word vector embeddings have been shown to contain and amplify biases in the data they are extracted from. Consequently, many techniques have been proposed to identify, mitigate, and attenuate these biases in word representations. In this paper, we utilize interactive visualization to increase the interpretability and accessibility of a collection of state-of-the-art debiasing techniques. To aid this, we present the Visualization of Embedding Representations for deBiasing (“VERB”) system, an open-source web-based visualization tool that helps users gain a technical understanding and visual intuition of the inner workings of debiasing techniques, with a focus on their geometric properties. In particular, VERB offers easy-to-follow examples that explore the effects of these debiasing techniques on the geometry of high-dimensional word vectors. To help understand how various debiasing techniques change the underlying geometry, VERB decomposes each technique into interpretable sequences of primitive transformations and highlights their effect on the word vectors using dimensionality reduction and interactive visual exploration. VERB is designed to target natural language processing (NLP) practitioners who are designing decision-making systems on top of word embeddings, and also researchers working with the fairness and ethics of machine learning systems in NLP. It can also serve as a visual medium for education, which helps an NLP novice understand and mitigate biases in word embeddings.
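One geometric primitive that recurs in the debiasing techniques VERB visualizes is a linear projection that removes a bias direction from a word vector. The numpy sketch below shows that primitive on made-up 4-dimensional vectors; it is not VERB's own code, and real embeddings have hundreds of dimensions.

    import numpy as np

    # Made-up 4-dimensional "word vectors"; real embeddings are much higher-dimensional.
    he       = np.array([ 0.8, 0.1,  0.3, 0.1])
    she      = np.array([-0.7, 0.2,  0.3, 0.1])
    engineer = np.array([ 0.5, 0.6, -0.2, 0.4])

    bias_direction = he - she
    bias_direction = bias_direction / np.linalg.norm(bias_direction)   # unit length

    def project_out(vector, direction):
        """Remove the component of `vector` lying along a unit-length `direction`."""
        return vector - np.dot(vector, direction) * direction

    debiased = project_out(engineer, bias_direction)
    print("component along bias before:", np.dot(engineer, bias_direction))
    print("component along bias after: ", np.dot(debiased, bias_direction))   # ~0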

Citations: 0