
Proceedings of the 28th International Conference on Intelligent User Interfaces: Latest Publications

The Role of Lexical Alignment in Human Understanding of Explanations by Conversational Agents
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584086
S. Srivastava, M. Theune, Alejandro Catalá
Explainable Artificial Intelligence (XAI) focuses on research and technology that can explain an AI system’s functioning and its underlying methods, and also on making these explanations better through personalization. Our research study investigates a natural language personalization method called lexical alignment in understanding an explanation provided by a conversational agent. The study setup was online and navigated the participants through an interaction with a conversational agent. Participants faced either an agent designed to align its responses to those of the participants, a misaligned agent, or a control condition that did not involve any dialogue. The dialogue delivered an explanation based on a pre-defined set of causes and effects. The recall and understanding of the explanations were evaluated using a combination of Yes-No questions, a Cloze test (fill-in-the-blanks), and What-style questions. The analysis of the test scores revealed a significant advantage in information recall for those who interacted with an aligning agent over participants who either interacted with a non-aligning agent or did not go through any dialogue. The Yes-No questions that included probes on higher-order inferences (understanding) also reflected an advantage for participants who had an aligned dialogue over both the non-aligned and no-dialogue conditions. The results overall suggest a positive effect of lexical alignment on the understanding of explanations.
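To make the manipulated mechanism concrete, here is a minimal sketch (not the authors' implementation) of lexical alignment in a templated agent: before replying, the agent swaps its default wording for synonyms the user has already used. The synonym table and the example template are hypothetical.

```python
# Minimal sketch of lexical alignment: the agent replaces its default
# wording with terms the user has already used, when a known synonym
# pair exists. Synonym pairs and the template are illustrative only.

SYNONYMS = {
    "car": {"automobile", "vehicle"},
    "begin": {"start", "commence"},
}

def align_response(template: str, user_utterances: list[str]) -> str:
    user_words = {w.lower().strip(".,!?") for u in user_utterances for w in u.split()}
    aligned = []
    for word in template.split():
        replacement = word
        for variant in SYNONYMS.get(word.lower(), set()):
            if variant in user_words:          # prefer the user's own term
                replacement = variant
                break
        aligned.append(replacement)
    return " ".join(aligned)

if __name__ == "__main__":
    history = ["When does the vehicle start moving?"]
    print(align_response("The car will begin moving once the light turns green.", history))
    # -> "The vehicle will start moving once the light turns green."
```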
Citations: 0
Embodied Agents for Obstetric Simulation Training
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584100
Carlos Pereira Santos, Joey Relouw, Kevin Hutchinson-Lhuissier, A. V. Buggenum, A. Boudry, A. Fransen, M. V. D. Ven, Igor Mayer
Post-partum hemorrhaging is a medical emergency that occurs during childbirth and, in extreme cases, can be life-threatening. It is the number one cause of maternal mortality worldwide. High-quality training of medical staff can contribute to early diagnosis and help prevent escalation to more serious cases. Healthcare education uses manikin-based simulators to train obstetricians for various childbirth scenarios before training on real patients. However, these medical simulators lack certain key features portraying important symptoms and are incapable of communicating with the trainees. The authors present a digital embodiment agent that can improve the current state of the art by providing a specification of the requirements as well as an extensive design and development approach. This digital embodiment allows educators to respond and role-play as the patient in real time and can easily be integrated with existing training procedures. This research was performed in collaboration with medical experts, making a new contribution to medical training by bringing digital humans and the representation of affective interfaces to the field of healthcare.
Citations: 0
Living Memories: AI-Generated Characters as Digital Mementos
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584065
Pat Pataranutaporn, Valdemar Danry, Lancelot Blanchard, Lavanay Thakral, Naoki Ohsugi, P. Maes, Misha Sra
Every human culture has developed practices and rituals associated with remembering people of the past - be it for mourning, cultural preservation, or learning about historical events. In this paper, we present the concept of “Living Memories”: interactive digital mementos that are created from journals, letters and data that an individual has left behind. Like an interactive photograph, living memories can be talked to and asked questions, making a person’s knowledge, attitudes and past experiences easily accessible. To demonstrate our concept, we created an AI-based system for generating living memories from any data source and implemented living memories of the three historical figures “Leonardo Da Vinci”, “Murasaki Shikibu”, and “Captain Robert Scott”. As a second key contribution, we present a novel metrics scheme for evaluating the accuracy of living memory architectures and show that the accuracy of our pipeline improves over baselines. Finally, we compare the user experience and learning effects of interacting with the living memory of Leonardo Da Vinci to reading his journal. Our results show that interacting with the living memory, in addition to simply reading a journal, increases learning effectiveness and motivation to learn about the character.
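As an illustration of one way such a system could ground answers in a person's own writings, the following is a minimal, hypothetical retrieval sketch: journal passages are indexed with TF-IDF and the passages most similar to a question are assembled into a prompt for a downstream text generator. The passages are illustrative, the generator step is left as a placeholder, and the paper's actual pipeline is not reproduced here.

```python
# Minimal sketch of grounding a "living memory" answer in a person's own
# writings: retrieve the journal passages most similar to the question and
# hand them to a response generator. TF-IDF retrieval stands in for whatever
# embedding model the real system uses; the generation step is a placeholder.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

journal_passages = [
    "Studied the flight of birds today; sketched the articulation of the wing.",
    "The patron demands the mural be finished before the feast of San Giovanni.",
    "Notes on water: its vortices resemble the curling of hair.",
]

vectorizer = TfidfVectorizer().fit(journal_passages)
passage_vectors = vectorizer.transform(journal_passages)

def retrieve(question: str, k: int = 2) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([question]), passage_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [journal_passages[i] for i in top]

question = "What did you observe about water?"
context = retrieve(question)
prompt = ("Answer in the first person, using only these journal entries:\n"
          + "\n".join(context) + f"\nQ: {question}")
# prompt would then be passed to a text generator (not shown).
print(prompt)
```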
Citations: 0
A Dataset and Machine Learning Approach to Classify and Augment Interface Elements of Household Appliances to Support People with Visual Impairment
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584038
Hanna Tschakert, Florian Lang, Markus Wieland, Albrecht Schmidt, Tonja Machulla
Many modern household appliances are challenging to operate for people with visual impairment. Low-contrast designs and insufficient tactile feedback make it difficult to distinguish interface elements and to recognize their function. Augmented reality (AR) can be used to visually highlight such elements and provide assistance to people with residual vision. To realize this goal, we (1) created a dataset consisting of 13,702 images of interfaces from household appliances and manually labeled control elements; (2) trained a neural network to recognize control elements and to distinguish between PushButton, TouchButton, Knob, Slider, and Toggle; and (3) designed various contrast-rich and visually simple AR augmentations for these elements. The results were implemented as a screen-based assistive AR application, which we tested in a user study with six individuals with visual impairment. Participants were able to recognize control elements that were imperceptible without the assistive application. The approach was well received, especially for the potential of familiarizing oneself with novel devices. The automatic parsing and augmentation of interfaces provide an important step toward the independent interaction of people with visual impairments with their everyday environment.
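For readers who want a concrete picture of step (2), the following is a minimal sketch of a five-way image classifier over the element types named above; the architecture, input size, and the omitted training loop are illustrative and not the authors' network.

```python
# Minimal sketch of a five-way control-element classifier (PushButton,
# TouchButton, Knob, Slider, Toggle). Architecture and input size are
# illustrative; the paper's actual network is not reproduced here.

import torch
import torch.nn as nn

CLASSES = ["PushButton", "TouchButton", "Knob", "Slider", "Toggle"]

class ControlElementNet(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = ControlElementNet()
dummy_crop = torch.randn(1, 3, 64, 64)        # one cropped interface element
logits = model(dummy_crop)
print(CLASSES[logits.argmax(dim=1).item()])   # predicted element type
```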
Citations: 1
Investigating the Intelligibility of Plural Counterfactual Examples for Non-Expert Users: an Explanation User Interface Proposition and User Study
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584082
Clara Bove, Marie-Jeanne Lesot, C. Tijus, Marcin Detyniecki
Plural counterfactual examples have been proposed to explain the prediction of a classifier by offering a user several instances of minimal modifications that may be performed to change the prediction. Yet, such explanations may provide too much information, generating potential confusion for end-users with no specific knowledge of either machine learning or the application domain. In this paper, we investigate the design of explanation user interfaces for plural counterfactual examples offering comparative analysis features to mitigate this potential confusion and improve the intelligibility of such explanations for non-expert users. We propose an implementation of such an enhanced explanation user interface, illustrating it in a financial scenario related to a loan application. We then present the results of a lab user study conducted with 112 participants to evaluate the effectiveness of having plural examples and of offering comparative analysis principles, both on the objective understanding and satisfaction of such explanations. The results demonstrate the effectiveness of the plural condition, both on objective understanding and satisfaction scores, as compared to having a single counterfactual example. Besides the statistical analysis, we perform a thematic analysis of the participants’ responses to the open-response questions, which also shows encouraging results for the comparative analysis features on objective understanding.
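As a generic illustration of what "several instances of minimal modifications" can mean in practice, here is a minimal sketch that perturbs one feature at a time of a rejected loan application until a fitted classifier's decision flips, yielding plural counterfactual candidates. The synthetic data, model, and step sizes are illustrative and are not the generation method used in the study.

```python
# Minimal sketch of plural counterfactual generation: for a rejected loan
# applicant, perturb one feature at a time until the classifier's decision
# flips, yielding several candidate "what would have to change" examples.
# The data, model, and step sizes are all illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                       # income, debt, tenure (standardised)
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactuals(x, steps=20, step_size=0.1):
    original = clf.predict(x.reshape(1, -1))[0]
    examples = []
    for feature in range(x.shape[0]):
        for direction in (+1, -1):
            candidate = x.copy()
            for _ in range(steps):
                candidate[feature] += direction * step_size
                if clf.predict(candidate.reshape(1, -1))[0] != original:
                    examples.append((feature, direction, candidate.copy()))
                    break
    return examples

rejected = np.array([-0.5, 0.8, 0.0])
for feature, direction, cf in counterfactuals(rejected):
    print(f"change feature {feature} in direction {direction:+d} -> {np.round(cf, 2)}")
```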
Citations: 5
Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584080
Federico Maria Cau, H. Hauptmann, L. D. Spano, N. Tintarev
A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in the AI – rejecting advice when it is incorrect, and accepting advice when it is correct. Previous findings suggest that explanations can cause an over-reliance on AI (overly accepting advice). Explanations that evoke appropriate trust are even more challenging for decision-making tasks that are difficult for humans and AI. For this reason, we study decision-making by non-experts in the high-uncertainty domain of stock trading. We compare the effectiveness of three different explanation styles (influenced by inductive, abductive, and deductive reasoning) and the role of AI confidence in terms of a) the users’ reliance on the XAI interface elements (charts with indicators, AI prediction, explanation), b) the correctness of the decision (task performance), and c) the agreement with the AI’s prediction. In contrast to previous work, we look at interactions between different aspects of decision-making, including AI correctness, and the combined effects of AI confidence and explanation styles. Our results show that specific explanation styles (abductive and deductive) improve the user’s task performance in the case of high AI confidence compared to inductive explanations. In other words, these styles of explanations were able to invoke correct decisions (for both positive and negative decisions) when the system was certain. In such a condition, the agreement between the user’s decision and the AI prediction confirms this finding, highlighting a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI. Our findings further indicate a need to consider AI confidence as a criterion for including or excluding explanations from AI interfaces. In addition, this paper highlights the importance of carefully selecting an explanation style according to the characteristics of the task and data.
Citations: 2
Perspective: Leveraging Human Understanding for Identifying and Characterizing Image Atypicality
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584096
Shahin Sharifi Noorian, S. Qiu, Burcu Sayin, Agathe Balayn, U. Gadiraju, Jie Yang, A. Bozzon
High-quality data plays a vital role in developing reliable image classification models. Despite that, what makes an image difficult to classify remains an unstudied topic. This paper provides a first-of-its-kind, model-agnostic characterization of image atypicality based on human understanding. We consider the setting of image classification “in the wild”, where a large number of unlabeled images are accessible, and introduce a scalable and effective human computation approach for proactive identification and characterization of atypical images. Our approach consists of i) an image atypicality identification and characterization task that presents to the human worker both a local view of visually similar images and a global view of images from the class of interest and ii) an automatic image sampling method that selects a diverse set of atypical images based on both visual and semantic features. We demonstrate the effectiveness and cost-efficiency of our approach through controlled crowdsourcing experiments and provide a characterization of image atypicality based on human annotations of 10K images. We showcase the utility of the identified atypical images by testing state-of-the-art image classification services against such images and provide an in-depth comparative analysis of the alignment between human- and machine-perceived image atypicality. Our findings have important implications for developing and deploying reliable image classification systems.
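One common way to realize the "diverse set" selection described above is greedy farthest-point sampling over combined feature vectors; the sketch below illustrates that idea with random stand-in features and is not necessarily the sampling method used in the paper.

```python
# Minimal sketch of diversity-aware sampling: greedily pick images whose
# combined visual + semantic features are farthest from everything already
# selected (farthest-point selection). Feature vectors are random stand-ins.

import numpy as np

def diverse_sample(features: np.ndarray, k: int) -> list[int]:
    selected = [0]                                     # seed with an arbitrary item
    dist = np.linalg.norm(features - features[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())                       # farthest from current selection
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected

rng = np.random.default_rng(1)
visual = rng.normal(size=(500, 128))    # e.g. CNN embeddings of unlabeled images
semantic = rng.normal(size=(500, 32))   # e.g. predicted-class or caption embeddings
features = np.hstack([visual, semantic])
print(diverse_sample(features, k=10))   # indices of images to send to crowd workers
```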
Citations: 0
Advice Provision in Teleoperation of Autonomous Vehicles
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584068
Yohai Trabelsi, Or Shabat, J. Lanir, Oleg Maksimov, Sarit Kraus
Teleoperation of autonomous vehicles has been gaining a lot of attention recently and is expected to play an important role in helping autonomous vehicles handle difficult situations which they cannot handle on their own. In such cases, a remote driver located in a teleoperation center can remotely drive the vehicle until the situation is resolved. However, teledriving is a challenging task and requires many cognitive resources from the teleoperator. Our goal is to assist the remote driver in some complex situations by giving the driver appropriate advice. The advice is displayed on the driver’s screen to help her make the right decision. To this end, we introduce the TeleOperator Advisor (TOA), an adaptive agent that provides assisting advice to a remote driver. We evaluate the TOA in a simulation-based setting in two scenarios: overtaking a slow vehicle and passing through a traffic light. Results indicate that our advice helps to reduce the cognitive load of the remote driver and improve driving performance.
Citations: 1
EarPPG: Securing Your Identity with Your Ears
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584070
Seokmin Choi, Junghwan Yim, Yincheng Jin, Yang Gao, Jiyang Li, Zhanpeng Jin
Wearable devices have become indispensable gadgets in people’s daily lives nowadays; wireless earphones in particular have experienced unprecedented growth in recent years, which has led to increasing interest in and exploration of user authentication techniques. Conventional user authentication methods embedded in wireless earphones that use microphones or other modalities are vulnerable to environmental factors, such as loud noises or occlusions. To address this limitation, we introduce EarPPG, a new biometric modality that takes advantage of the unique in-ear photoplethysmography (PPG) signals, altered by a user’s unique speaking behaviors. When the user is speaking, muscle movements cause changes in the blood vessel geometry, inducing unique PPG signal variations. As speaking behaviors and PPG signals are unique, EarPPG combines both biometric traits and presents a secure and obscure authentication solution. The system first detects and segments EarPPG signals and proceeds to extract effective features to construct a user authentication model with the 1D ReGRU network. We conducted comprehensive real-world evaluations with 25 human participants and achieved 94.84% accuracy, and 0.95 precision, recall, and f1-score, respectively. Moreover, considering the practical implications, we conducted several extensive in-the-wild experiments, including body motions, occlusions, lighting, and permanence. The overall outcomes of this study possess the potential to be embedded in future smart earable devices.
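To illustrate the segment-to-identity mapping described above, here is a minimal sketch of a recurrent classifier over windowed PPG segments. It uses a plain GRU with illustrative dimensions; the paper's 1D ReGRU architecture and its feature extraction are not reproduced.

```python
# Minimal sketch of classifying in-ear PPG segments with a recurrent model.
# A plain GRU with illustrative dimensions stands in for the paper's 1D ReGRU,
# just to show the segment -> identity mapping.

import torch
import torch.nn as nn

class PPGAuthNet(nn.Module):
    def __init__(self, num_users: int = 25, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_users)

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (batch, samples, 1) -- one windowed PPG segment per row
        _, last_hidden = self.gru(segment)
        return self.head(last_hidden[-1])              # per-user logits

model = PPGAuthNet()
window = torch.randn(4, 250, 1)                        # 4 segments of 250 samples each
print(model(window).shape)                             # torch.Size([4, 25])
```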
Citations: 0
Understanding Uncertainty: How Lay Decision-makers Perceive and Interpret Uncertainty in Human-AI Decision Making
Pub Date : 2023-03-27 DOI: 10.1145/3581641.3584033
Snehal Prabhudesai, Leyao Yang, Sumit Asthana, Xun Huan, Q. Liao, Nikola Banovic
Decision Support Systems (DSS) based on Machine Learning (ML) often aim to assist lay decision-makers, who are not math-savvy, in making high-stakes decisions. However, existing ML-based DSS are not always transparent about the probabilistic nature of ML predictions and how uncertain each prediction is. This lack of transparency could give lay decision-makers a false sense of reliability. Growing calls for AI transparency have led to increasing efforts to quantify and communicate model uncertainty. However, there are still gaps in knowledge regarding how and why decision-makers utilize ML uncertainty information in their decision process. Here, we conducted a qualitative, think-aloud user study with 17 lay decision-makers who interacted with three different DSS: 1) an interactive visualization, 2) a DSS based on an ML model that provides predictions without uncertainty information, and 3) the same DSS with uncertainty information. Our qualitative analysis found that communicating uncertainty about ML predictions forced participants to slow down and think analytically about their decisions. This in turn made participants more vigilant, resulting in a reduction in over-reliance on ML-based DSS. Our work contributes empirical knowledge on how lay decision-makers perceive, interpret, and make use of uncertainty information when interacting with DSS. Such foundational knowledge informs the design of future ML-based DSS that embrace transparent uncertainty communication.
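As a small illustration of what communicating uncertainty can look like at the interface level, the sketch below turns a model's class probabilities into a plain-language confidence statement via normalized entropy. The thresholds, wording, and example labels are illustrative and are not taken from the study.

```python
# Minimal sketch of making ML uncertainty explicit for a lay decision-maker:
# convert class probabilities into a plain-language confidence statement
# using normalized entropy. Thresholds and wording are illustrative.

import numpy as np

def describe_uncertainty(probs: np.ndarray, labels: list[str]) -> str:
    probs = probs / probs.sum()
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    uncertainty = entropy / np.log(len(probs))          # 0 = certain, 1 = maximally unsure
    best = labels[int(probs.argmax())]
    if uncertainty < 0.3:
        tone = "fairly confident"
    elif uncertainty < 0.7:
        tone = "somewhat unsure"
    else:
        tone = "highly uncertain"
    return (f"The model suggests '{best}' ({probs.max():.0%}) "
            f"but is {tone} (uncertainty {uncertainty:.2f}).")

print(describe_uncertainty(np.array([0.55, 0.35, 0.10]),
                           ["benign", "suspicious", "malignant"]))
```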
Citations: 4