
Latest Publications: ACM Transactions on Interactive Intelligent Systems

How do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-03-24 · DOI: 10.1145/3588594
Tim Schrills, T. Franke
When interacting with artificial intelligence (AI) in the medical domain, users frequently face automated information processing, which can remain opaque to them. For example, users with diabetes may interact daily with automated insulin delivery (AID). However, effective AID therapy requires traceability of automated decisions for diverse users. Grounded in research on human-automation interaction, we study Subjective Information Processing Awareness (SIPA) as a key construct to research users’ experience of explainable AI. The objective of the present research was to examine how users experience differing levels of traceability of an AI algorithm. We developed a basic AID simulation to create realistic scenarios for an experiment with N = 80, where we examined the effect of three levels of information disclosure on SIPA and performance. Attributes serving as the basis for insulin needs calculation were shown to users, who predicted the AID system’s calculation after over 60 observations. Results showed a difference in SIPA after repeated observations, associated with a general decline of SIPA ratings over time. Supporting scale validity, SIPA was strongly correlated with trust and satisfaction with explanations. The present research indicates that the effect of different levels of information disclosure may need several repetitions before it manifests. Additionally, high levels of information disclosure may lead to a miscalibration between SIPA and performance in predicting the system’s results. The results indicate that for a responsible design of XAI, system designers could utilize prediction tasks in order to calibrate experienced traceability.
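The abstract reports that SIPA was strongly correlated with trust and satisfaction with explanations. A minimal sketch of that kind of scale-validity check is below; the ratings are invented placeholder data on a 7-point scale and `pearson_r` is a plain-Python Pearson correlation, not the study's actual data or analysis pipeline.

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant questionnaire means (1-7 Likert scale).
sipa  = [5.2, 4.1, 6.0, 3.3, 5.5, 4.8]
trust = [5.0, 4.4, 6.2, 3.0, 5.1, 4.9]
print(round(pearson_r(sipa, trust), 3))
```

A high positive coefficient on data like this is what "strongly correlated" refers to; the paper's actual effect sizes are not reproduced here.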
{"title":"How do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems","authors":"Tim Schrills, T. Franke","doi":"10.1145/3588594","DOIUrl":"https://doi.org/10.1145/3588594","url":null,"abstract":"When interacting with artificial intelligence (AI) in the medical domain, users frequently face automated information processing, which can remain opaque to them. For example, users with diabetes may interact daily with automated insulin delivery (AID). However, effective AID therapy requires traceability of automated decisions for diverse users. Grounded in research on human-automation interaction, we study Subjective Information Processing Awareness (SIPA) as a key construct to research users’ experience of explainable AI. The objective of the present research was to examine how users experience differing levels of traceability of an AI algorithm. We developed a basic AID simulation to create realistic scenarios for an experiment with N = 80, where we examined the effect of three levels of information disclosure on SIPA and performance. Attributes serving as the basis for insulin needs calculation were shown to users, who predicted the AID system’s calculation after over 60 observations. Results showed a difference in SIPA after repeated observations, associated with a general decline of SIPA ratings over time. Supporting scale validity, SIPA was strongly correlated with trust and satisfaction with explanations. The present research indicates that the effect of different levels of information disclosure may need several repetitions before it manifests. Additionally, high levels of information disclosure may lead to a miscalibration between SIPA and performance in predicting the system’s results. 
The results indicate that for a responsible design of XAI, system designers could utilize prediction tasks in order to calibrate experienced traceability.","PeriodicalId":48574,"journal":{"name":"ACM Transactions on Interactive Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2023-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85212145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Conversational Context-sensitive Ad Generation with a Few Core-Queries
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-03-23 · DOI: 10.1145/3588578
Ryoichi Shibata, Shoya Matsumori, Yosuke Fukuchi, Tomoyuki Maekawa, Mitsuhiko Kimoto, M. Imai
When people are talking together in front of digital signage, advertisements that are aware of the context of the dialogue will work the most effectively. However, it has been challenging for computer systems to retrieve the appropriate advertisement from among the many options presented in large databases. Our proposed system, the Conversational Context-sensitive Advertisement generator (CoCoA), is the first attempt to apply masked word prediction to web information retrieval that takes into account the dialogue context. The novelty of CoCoA is that advertisers simply need to prepare a few abstract phrases, called Core-Queries, and then CoCoA automatically generates a context-sensitive expression as a complete search query by utilizing a masked word prediction technique that adds a word related to the dialogue context to one of the prepared Core-Queries. This automatic generation frees the advertisers from having to come up with context-sensitive phrases to attract users’ attention. Another unique point is that the modified Core-Query offers users speaking in front of the CoCoA system a list of context-sensitive advertisements. CoCoA was evaluated by crowd workers regarding the context-sensitivity of the generated search queries against the dialogue text of multiple domains prepared in advance. The results indicated that CoCoA could present more contextual and practical advertisements than other web-retrieval systems. Moreover, CoCoA acquired a higher evaluation in a particular conversation that included many travel topics to which the Core-Queries were designated, implying that it succeeded in adapting the Core-Queries for the specific ongoing context better than the compared method without any effort on the part of the advertisers. In addition, case studies with users and advertisers revealed that the context-sensitive advertisements generated by CoCoA also had an effect on the content of the ongoing dialogue. 
Specifically, since pairs unfamiliar with each other more frequently referred to the advertisement CoCoA displayed, the advertisements had an effect on the topics about which the pairs spoke. Moreover, participants in an advertiser role recognized that some of the search queries generated by CoCoA fit the context of a conversation and that CoCoA improved the effect of the advertisement. In particular, they learned how to design a good Core-Query with ease by observing the users’ responses to the advertisements retrieved with the generated search queries.
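The query-expansion step the abstract describes can be sketched roughly as follows. CoCoA itself uses masked word prediction with a language model; the toy stand-in below merely picks the most frequent content word from the dialogue to fill a `[MASK]` slot in a Core-Query, so the function name, stopword list, and example turns are all illustrative assumptions rather than the paper's method.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "to", "is", "i", "we", "and", "of", "in", "for", "it"}

def expand_core_query(core_query, dialogue, mask="[MASK]"):
    """Fill the mask slot with the most salient dialogue content word.

    Stand-in for the masked-LM prediction the paper describes: here
    salience is just term frequency over the recent dialogue turns.
    """
    words = re.findall(r"[a-z']+", " ".join(dialogue).lower())
    content = [w for w in words if w not in STOPWORDS and len(w) > 2]
    if not content:
        return core_query.replace(mask, "").strip()
    top_word, _ = Counter(content).most_common(1)[0]
    return core_query.replace(mask, top_word)

turns = ["I want to visit Kyoto next month",
         "Kyoto temples are beautiful in autumn"]
print(expand_core_query("[MASK] hotel deals", turns))
```

With these toy turns, "kyoto" is the most frequent content word, so the abstract Core-Query "`[MASK]` hotel deals" becomes a context-sensitive search query without any effort from the advertiser.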
{"title":"Conversational Context-sensitive Ad Generation with a Few Core-Queries","authors":"Ryoichi Shibata, Shoya Matsumori, Yosuke Fukuchi, Tomoyuki Maekawa, Mitsuhiko Kimoto, M. Imai","doi":"10.1145/3588578","DOIUrl":"https://doi.org/10.1145/3588578","url":null,"abstract":"When people are talking together in front of digital signage, advertisements that are aware of the context of the dialogue will work the most effectively. However, it has been challenging for computer systems to retrieve the appropriate advertisement from among the many options presented in large databases. Our proposed system, the Conversational Context-sensitive Advertisement generator (CoCoA), is the first attempt to apply masked word prediction to web information retrieval that takes into account the dialogue context. The novelty of CoCoA is that advertisers simply need to prepare a few abstract phrases, called Core-Queries, and then CoCoA automatically generates a context-sensitive expression as a complete search query by utilizing a masked word prediction technique that adds a word related to the dialogue context to one of the prepared Core-Queries. This automatic generation frees the advertisers from having to come up with context-sensitive phrases to attract users’ attention. Another unique point is that the modified Core-Query offers users speaking in front of the CoCoA system a list of context-sensitive advertisements. CoCoA was evaluated by crowd workers regarding the context-sensitivity of the generated search queries against the dialogue text of multiple domains prepared in advance. The results indicated that CoCoA could present more contextual and practical advertisements than other web-retrieval systems. 
Moreover, CoCoA acquired a higher evaluation in a particular conversation that included many travel topics to which the Core-Queries were designated, implying that it succeeded in adapting the Core-Queries for the specific ongoing context better than the compared method without any effort on the part of the advertisers. In addition, case studies with users and advertisers revealed that the context-sensitive advertisements generated by CoCoA also had an effect on the content of the ongoing dialogue. Specifically, since pairs unfamiliar with each other more frequently referred to the advertisement CoCoA displayed, the advertisements had an effect on the topics about which the pairs spoke. Moreover, participants of an advertiser role recognized that some of the search queries generated by CoCoA fit the context of a conversation and that CoCoA improved the effect of the advertisement. In particular, they learned how to design of designing a good Core-Query at ease by observing the users’ response to the advertisements retrieved with the generated search queries.","PeriodicalId":48574,"journal":{"name":"ACM Transactions on Interactive Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2023-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89068206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effects of AI and Logic-Style Explanations on Users’ Decisions under Different Levels of Uncertainty
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-03-16 · DOI: 10.1145/3588320
Federico Maria Cau, H. Hauptmann, L. D. Spano, N. Tintarev
Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, while previous work evaluates users’ understanding of explanations, factors influencing the decision support are largely overlooked in the literature. This paper addresses this gap by studying the impact of user uncertainty, AI correctness, and the interaction between AI uncertainty and explanation logic-styles, for classification tasks. We conducted two separate studies: one asking participants to recognise hand-written digits and one to classify the sentiment of reviews. To assess the decision making, we analysed the task performance, agreement with the AI suggestion, and the user’s reliance on the XAI interface elements. Participants made their decisions relying on three pieces of information in the XAI interface (image or text instance, AI prediction, and explanation). Each participant was shown one explanation style (a between-participants design), following one of three styles of logical reasoning: inductive, deductive, or abductive. This allowed us to study how different levels of AI uncertainty influence the effectiveness of different explanation styles. The results show that user uncertainty and AI correctness on predictions significantly affected users’ classification decisions across the analysed metrics. In both domains (images and text), users relied mainly on the instance to decide. Users were usually overconfident about their choices, and this evidence was more pronounced for text. Furthermore, the inductive-style explanations led to over-reliance on the AI advice in both domains – they were the most persuasive, even when the AI was incorrect. The abductive and deductive styles have complex effects depending on the domain and the AI uncertainty levels.
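For a digit-recognition task, the three logic styles could be rendered with templates along these lines. This is a hedged sketch: the `explain` helper and its wording are invented here for illustration and are not taken from the study's materials.

```python
def explain(style, prediction, evidence):
    """Render a classifier explanation in one of three logic styles."""
    if style == "inductive":   # from observed examples to a conclusion
        return (f"Past images with {evidence} were labelled {prediction}, "
                f"so this image is likely a {prediction}.")
    if style == "deductive":   # from a general rule to the specific case
        return (f"All images with {evidence} are {prediction}s; this image "
                f"has {evidence}, therefore it is a {prediction}.")
    if style == "abductive":   # best hypothesis for the observation
        return (f"This image shows {evidence}; the best explanation is "
                f"that it is a {prediction}.")
    raise ValueError(f"unknown style: {style}")

for style in ("inductive", "deductive", "abductive"):
    print(explain(style, "7", "a single angled stroke"))
```

In a between-participants design, each participant would see output from exactly one of these branches for every trial.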
{"title":"Effects of AI and Logic-Style Explanations on Users’ Decisions under Different Levels of Uncertainty","authors":"Federico Maria Cau, H. Hauptmann, L. D. Spano, N. Tintarev","doi":"10.1145/3588320","DOIUrl":"https://doi.org/10.1145/3588320","url":null,"abstract":"Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, while previous work evaluates the users’ understanding of explanations, factors influencing the decision support are largely overlooked in the literature. This paper addresses this gap by studying the impact of user uncertainty, AI correctness, and the interaction between AI uncertainty and explanation logic-styles, for classification tasks. We conducted two separate studies: one requesting participants to recognise hand-written digits and one to classify the sentiment of reviews. To assess the decision making, we analysed the task performance, agreement with the AI suggestion, and the user’s reliance on the XAI interface elements. Participants make their decision relying on three pieces of information in the XAI interface (image or text instance, AI prediction, and explanation). Participants were shown one explanation style (between-participants design): according to three styles of logical reasoning (inductive, deductive, and abductive). This allowed us to study how different levels of AI uncertainty influence the effectiveness of different explanation styles. The results show that user uncertainty and AI correctness on predictions significantly affected users’ classification decisions considering the analysed metrics. In both domains (images and text), users relied mainly on the instance to decide. Users were usually overconfident about their choices, and this evidence was more pronounced for text. Furthermore, the inductive style explanations led to over-reliance on the AI advice in both domains – it was the most persuasive, even when the AI was incorrect. 
The abductive and deductive styles have complex effects depending on the domain and the AI uncertainty levels.","PeriodicalId":48574,"journal":{"name":"ACM Transactions on Interactive Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80461681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-03-15 · DOI: https://dl.acm.org/doi/10.1145/3587470
Yiran Li, Junpeng Wang, Takanori Fujiwara, Kwan-Liu Ma

Adversarial attacks on a convolutional neural network (CNN)—injecting human-imperceptible perturbations into an input image—could fool a high-performance CNN into making incorrect predictions. The success of adversarial attacks raises serious concerns about the robustness of CNNs, and prevents them from being used in safety-critical applications, such as medical diagnosis and autonomous driving. Our work introduces a visual analytics approach to understanding adversarial attacks by answering two questions: (1) which neurons are more vulnerable to attacks and (2) which image features do these vulnerable neurons capture during the prediction? For the first question, we introduce multiple perturbation-based measures to break down the attacking magnitude into individual CNN neurons and rank the neurons by their vulnerability levels. For the second, we identify image features (e.g., cat ears) that highly stimulate a user-selected neuron to augment and validate the neuron’s responsibility. Furthermore, we support an interactive exploration of a large number of neurons, aided by hierarchical clustering based on the neurons’ roles in the prediction. To this end, a visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks.

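One plausible perturbation-based vulnerability measure can be sketched as follows: score each neuron by its mean absolute activation change between clean and adversarial inputs, then rank neurons by descending score. The function, the score definition, and the toy activation values are illustrative assumptions, not the paper's exact measures.

```python
def rank_neuron_vulnerability(clean_acts, adv_acts):
    """Rank neurons by a simple perturbation-based vulnerability score.

    clean_acts / adv_acts: per-image lists of per-neuron activations.
    Score = mean absolute activation change under the adversarial
    perturbation (one of several possible measures; illustrative only).
    Returns (indices sorted most-vulnerable first, per-neuron scores).
    """
    n_neurons = len(clean_acts[0])
    scores = []
    for j in range(n_neurons):
        diffs = [abs(a[j] - c[j]) for c, a in zip(clean_acts, adv_acts)]
        scores.append(sum(diffs) / len(diffs))
    order = sorted(range(n_neurons), key=lambda j: -scores[j])
    return order, scores

# Two toy images, three toy neurons.
clean = [[0.2, 0.9, 0.1], [0.3, 0.8, 0.0]]
adv   = [[0.2, 0.1, 0.6], [0.4, 0.2, 0.5]]
order, scores = rank_neuron_vulnerability(clean, adv)
print(order)  # neuron 1 shifts most under attack, neuron 0 least
```

Ranked this way, the most vulnerable neurons are natural candidates for the feature-visualization step the abstract describes next.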
{"title":"Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks","authors":"Yiran Li, Junpeng Wang, Takanori Fujiwara, Kwan-Liu Ma","doi":"https://dl.acm.org/doi/10.1145/3587470","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3587470","url":null,"abstract":"<p>Adversarial attacks on a convolutional neural network (CNN)—injecting human-imperceptible perturbations into an input image—could fool a high-performance CNN into making incorrect predictions. The success of adversarial attacks raises serious concerns about the robustness of CNNs, and prevents them from being used in safety-critical applications, such as medical diagnosis and autonomous driving. Our work introduces a visual analytics approach to understanding adversarial attacks by answering two questions: (1) <i>which neurons are more vulnerable to attacks</i> and (2) <i>which image features do these vulnerable neurons capture during the prediction?</i>\u0000For the first question, we introduce multiple perturbation-based measures to break down the attacking magnitude into individual CNN neurons and rank the neurons by their vulnerability levels. For the second, we identify image features (e.g., cat ears) that highly stimulate a user-selected neuron to augment and validate the neuron’s responsibility. Furthermore, we support an interactive exploration of a large number of neurons by aiding with hierarchical clustering based on the neurons’ roles in the prediction. To this end, a visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks. 
We validate the effectiveness of our system through multiple case studies as well as feedback from domain experts.</p>","PeriodicalId":48574,"journal":{"name":"ACM Transactions on Interactive Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2023-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138508026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
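The abstract's first contribution, perturbation-based measures that attribute attack magnitude to individual neurons, can be illustrated with a toy sketch. The paper's exact measures are not given here, so the scoring below (a normalized activation shift between a clean and an adversarially perturbed input) is a hypothetical simplification, not the authors' method:

```python
import numpy as np

def neuron_vulnerability(acts_clean, acts_adv):
    """Rank neurons by how much an adversarial perturbation shifts their
    activations. `acts_clean` and `acts_adv` are arrays of shape
    (n_neurons,) with each neuron's activation on the clean and the
    perturbed input. Returns (indices sorted most-to-least vulnerable,
    per-neuron scores)."""
    delta = np.abs(acts_adv - acts_clean)          # per-neuron attack magnitude
    # Normalize by the clean activation scale so strongly firing
    # neurons do not dominate the ranking.
    score = delta / (np.abs(acts_clean) + 1e-8)
    return np.argsort(-score), score

# Toy example: neuron 2 shifts the most relative to its clean activation.
clean = np.array([1.0, 0.5, 0.2, 2.0])
adv   = np.array([1.1, 0.6, 0.9, 2.1])
order, score = neuron_vulnerability(clean, adv)
print(order[0])  # prints 2
```

A real pipeline would derive the activations from forward hooks on the CNN and aggregate over many attacked images before ranking.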
Co-design of human-centered, explainable AI for clinical decision support
IF 3.4 CAS Q4 Computer Science Q2 Computer Science Pub Date : 2023-03-14 DOI: https://dl.acm.org/doi/10.1145/3587271
Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, Alan Perotti, Salvatore Rinzivillo

eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique, and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test such a prototype with healthcare providers and collect their feedback, with a two-fold outcome: first, we obtain evidence that explanations increase users’ trust in the XAI system, and second, we obtain useful insights on the perceived deficiencies of their interaction with the system, so that we can re-design a better, more human-centered explanation interface.

Citations: 0
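The technical setting the abstract describes, multi-label prediction over sequences of ontology-linked patient codes, can be made concrete with a toy sketch. Everything below is hypothetical and not from the paper: the ICD-style code vocabulary is invented, and a 1-nearest-neighbour predictor stands in for the black-box clinical DSS.

```python
import numpy as np

CODES = ["I10", "E11", "N18", "J45"]  # toy ICD-style vocabulary (hypothetical)

def multi_hot(visits):
    """Flatten a sequence of visits (each a collection of codes)
    into a multi-hot feature vector."""
    v = np.zeros(len(CODES))
    for visit in visits:
        for c in visit:
            v[CODES.index(c)] = 1.0
    return v

# Tiny training set: (visit sequence, set of next-visit labels).
patients = [([["I10"], ["I10", "E11"]], {"N18"}),
            ([["J45"]],                 {"J45"}),
            ([["I10", "E11"]],          {"N18"})]

def predict(history):
    """Multi-label prediction by reusing the label set of the most
    similar past patient (a stand-in for the real DSS model)."""
    x = multi_hot(history)
    dists = [np.linalg.norm(x - multi_hot(h)) for h, _ in patients]
    return patients[int(np.argmin(dists))][1]

print(predict([["I10"], ["E11"]]))  # prints {'N18'}
```

The multi-hot flattening discards visit order; handling the sequential and ontology-linked structure the paper requires would need a richer encoding.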
The Influence of Personality Traits on User Interaction with Recommendation Interfaces
IF 3.4 CAS Q4 Computer Science Q2 Computer Science Pub Date : 2023-03-10 DOI: https://dl.acm.org/doi/10.1145/3558772
Dongning Yan, Li Chen

Users’ personality traits can take an active role in affecting their behavior when they interact with a computer interface. However, in the area of recommender systems (RS), though personality-based RS has been extensively studied, most works focus on algorithm design, with little attention paid to studying whether and how the personality may influence users’ interaction with the recommendation interface. In this manuscript, we report the results of a user study (with 108 participants) that not only measured the influence of users’ personality traits on their perception and performance when using the recommendation interface but also employed an eye-tracker to in-depth reveal how personality may influence users’ eye-movement behavior. Moreover, being different from related work that has mainly been conducted in a single product domain, our user study was performed in three typical application domains (i.e., electronics like smartphones, entertainment like movies, and tourism like hotels). Our results show that mainly three personality traits, i.e., Openness to experience, Conscientiousness, and Agreeableness, significantly influence users’ perception and eye-movement behavior, but the exact influences vary across the domains. Finally, we provide a set of guidelines that might be constructive for designing a more effective recommendation interface based on user personality.

Citations: 0
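A finding such as "Openness to experience significantly influences eye-movement behavior" typically rests on relating trait scores to gaze metrics across participants. A minimal sketch of that kind of analysis, with made-up numbers rather than the study's data:

```python
import numpy as np

# Hypothetical per-participant data: a Big Five trait score (1-5 scale)
# and an eye-movement metric, e.g. total fixation time in seconds on
# the recommendation panel. All values are invented for illustration.
openness = np.array([3.2, 4.1, 2.8, 4.6, 3.9, 2.5])
fixation = np.array([8.1, 11.0, 7.2, 12.4, 10.3, 6.8])

r = np.corrcoef(openness, fixation)[0, 1]  # Pearson correlation
print(f"r = {r:.2f}")  # strongly positive in this toy data
```

The study itself uses a richer design (three domains, multiple traits and metrics, significance testing), which this two-variable correlation only hints at.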