
Latest Publications from ACM Transactions on Interactive Intelligent Systems

Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation using Eye-tracking Technology
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-07-13 · DOI: 10.1145/3607145
Miguel Angel Meza Martínez, Mario Nadj, Moritz Langner, Peyman Toreini, Alexander Maedche
In Explainable Artificial Intelligence (XAI) research, various local model-agnostic methods have been proposed to explain individual predictions to users in order to increase the transparency of the underlying Artificial Intelligence (AI) systems. However, the user perspective has received less attention in XAI research, leading to (1) a lack of user involvement in the design process of local model-agnostic explanation representations and (2) a limited understanding of how users visually attend to them. Against this backdrop, we refined representations of local explanations from four well-established model-agnostic XAI methods in an iterative design process with users. Moreover, we evaluated the refined explanation representations in a laboratory experiment using eye-tracking technology as well as self-reports and interviews. Our results show that users do not necessarily prefer simple explanations and that their individual characteristics, such as gender and previous experience with AI systems, strongly influence their preferences. In addition, users find that some explanations are only useful in certain scenarios, making the selection of an appropriate explanation highly dependent on context. With our work, we contribute to ongoing research to improve transparency in AI.
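For readers unfamiliar with the term, a local model-agnostic explanation treats the model as a black box and explains one prediction at a time. The sketch below is a minimal illustration of that idea in the spirit of LIME, not the explanation representations studied in the paper: it perturbs a single instance, queries a scikit-learn classifier, and fits a weighted linear surrogate whose coefficients act as per-feature importances. The dataset, classifier, sample count, and kernel bandwidth are all arbitrary choices made for this example.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Black-box model to be explained; any classifier exposing predict_proba works.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_explanation(x, n_samples=2000, seed=0):
    """LIME-style local attribution: fit a weighted linear surrogate to the
    black-box output in a perturbation neighbourhood of the instance x."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))  # perturbed neighbours
    pz = model.predict_proba(Z)[:, 1]                          # black-box responses
    dist = np.linalg.norm((Z - x) / scale, axis=1)
    bandwidth = 0.75 * np.sqrt(x.size)                         # arbitrary kernel width
    weights = np.exp(-(dist ** 2) / bandwidth ** 2)            # nearer samples count more
    surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=weights)
    return surrogate.coef_                                      # local per-feature importance

coefs = local_explanation(X[0])
feature_names = load_breast_cancer().feature_names
for i in np.argsort(np.abs(coefs))[::-1][:5]:
    print(f"{feature_names[i]:>25s}  {coefs[i]:+.4f}")
```

The printed weights are exactly the kind of raw output whose visual representation the paper iteratively redesigns with users.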
Citations: 0
VERB: Visualizing and Interpreting Bias Mitigation Techniques Geometrically for Word Representations
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-06-22 · DOI: https://dl.acm.org/doi/10.1145/3604433
Archit Rathore, Sunipa Dev, Jeff M. Phillips, Vivek Srikumar, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei Zhang, Bei Wang

Word vector embeddings have been shown to contain and amplify biases in the data they are extracted from. Consequently, many techniques have been proposed to identify, mitigate, and attenuate these biases in word representations. In this paper, we utilize interactive visualization to increase the interpretability and accessibility of a collection of state-of-the-art debiasing techniques. To aid this, we present the Visualization of Embedding Representations for deBiasing (“VERB”) system, an open-source web-based visualization tool that helps users gain a technical understanding and visual intuition of the inner workings of debiasing techniques, with a focus on their geometric properties. In particular, VERB offers easy-to-follow examples that explore the effects of these debiasing techniques on the geometry of high-dimensional word vectors. To help understand how various debiasing techniques change the underlying geometry, VERB decomposes each technique into interpretable sequences of primitive transformations and highlights their effect on the word vectors using dimensionality reduction and interactive visual exploration. VERB is designed to target natural language processing (NLP) practitioners who are designing decision-making systems on top of word embeddings, and also researchers working with the fairness and ethics of machine learning systems in NLP. It can also serve as a visual medium for education, which helps an NLP novice understand and mitigate biases in word embeddings.
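As a concrete illustration of the kind of primitive geometric transformation such a tool visualizes, the sketch below applies a generic hard-debiasing-style step: estimate a bias direction from definitional word pairs and remove each vector's component along it. This is not VERB's own code; the toy 3-d embeddings and the single word pair are invented for the example.

```python
import numpy as np

def bias_direction(emb, pairs):
    """Estimate a bias direction as the normalised mean difference vector
    over definitional word pairs such as ('he', 'she')."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    v = np.mean(diffs, axis=0)
    return v / np.linalg.norm(v)

def project_out(vec, direction):
    """Primitive transformation: remove the component of `vec` lying along
    `direction` (an orthogonal projection)."""
    return vec - np.dot(vec, direction) * direction

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings"; real word vectors would come from a trained model.
emb = {
    "he":       np.array([ 0.9, 0.1, 0.2]),
    "she":      np.array([-0.8, 0.2, 0.1]),
    "engineer": np.array([ 0.5, 0.6, 0.3]),
}
g = bias_direction(emb, [("he", "she")])
debiased = project_out(emb["engineer"], g)
print("cosine with bias direction before:", round(cos(emb["engineer"], g), 3))
print("cosine with bias direction after: ", round(cos(debiased, g), 3))
```

Decomposing a debiasing method into steps like `bias_direction` and `project_out` is what lets their effect on the embedding geometry be shown one transformation at a time.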

Citations: 0
Visual Analytics of Co-Occurrences to Discover Subspaces in Structured Data
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-06-19 · DOI: https://dl.acm.org/doi/10.1145/3579031
Wolfgang Jentner, Giuliana Lindholz, Hanna Hauptmann, Mennatallah El-Assady, Kwan-Liu Ma, Daniel Keim

We present an approach that shows all relevant subspaces of categorical data condensed in a single picture. We model the categorical values of the attributes as co-occurrences with data partitions generated from structured data using pattern mining. We show that these co-occurrences are a-priori, allowing us to greatly reduce the search space, effectively generating the condensed picture where conventional approaches filter out several subspaces as these are deemed insignificant. The task of identifying interesting subspaces is common but difficult due to exponential search spaces and the curse of dimensionality. One application of such a task might be identifying a cohort of patients defined by attributes such as gender, age, and diabetes type that share a common patient history, which is modeled as event sequences. Filtering the data by these attributes is common but cumbersome and often does not allow a comparison of subspaces. We contribute a powerful multi-dimensional pattern exploration approach (MDPE-approach) agnostic to the structured data type that models multiple attributes and their characteristics as co-occurrences, allowing the user to identify and compare thousands of subspaces of interest in a single picture. In our MDPE-approach, we introduce two methods to dramatically reduce the search space, outputting only the boundaries of the search space in the form of two tables. We implement the MDPE-approach in an interactive visual interface (MDPE-vis) that provides a scalable, pixel-based visualization design allowing the identification, comparison, and sense-making of subspaces in structured data. Our case studies using a gold-standard dataset and external domain experts confirm our approach’s and implementation’s applicability. A third use case sheds light on the scalability of our approach and a user study with 15 participants underlines its usefulness and power.
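To make the co-occurrence modelling concrete, the following sketch counts how often each categorical attribute value co-occurs with each data partition on a toy table. The "partition" column merely stands in for the partitions the paper derives from event sequences via pattern mining, and all column names and values are invented.

```python
import pandas as pd

# Toy patient records; the "partition" column stands in for partitions that
# would come from pattern mining over structured event sequences.
df = pd.DataFrame({
    "gender":    ["F", "M", "F", "F", "M", "M"],
    "diabetes":  ["type1", "type2", "type2", "type1", "type2", "type1"],
    "partition": ["P1", "P1", "P2", "P2", "P2", "P1"],
})

# Co-occurrence counts of every categorical attribute value with every partition.
for attr in ("gender", "diabetes"):
    print(pd.crosstab(df[attr], df["partition"]), end="\n\n")

# A subspace such as (gender=F, diabetes=type1) is a conjunction of attribute
# values; counting its rows per partition gives the subspace's support.
subspace = df[(df["gender"] == "F") & (df["diabetes"] == "type1")]
print(subspace.groupby("partition").size())
```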

Citations: 0
The Role of Explainable AI in the Research Field of AI Ethics
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-06-01 · DOI: 10.1145/3599974
Heidi Vainio-Pekka, Mamia Ori-otse Agbese, Marianna Jantunen, Ville Vakkuri, Tommi Mikkonen, Rebekah Rousi, Pekka Abrahamsson
Ethics of Artificial Intelligence (AI) is a growing research field that has emerged in response to the challenges related to AI. Transparency poses a key challenge for implementing AI ethics in practice. One solution to transparency issues is AI systems that can explain their decisions. Explainable AI (XAI) refers to AI systems that are interpretable or understandable to humans. The research fields of AI ethics and XAI lack a common framework and conceptualization, and the field's depth and versatility remain unclear. A systematic approach to understanding the corpus is needed, and a systematic review offers an opportunity to detect research gaps and focus points. This paper presents the results of a systematic mapping study (SMS) of the research field of the Ethics of AI. The focus is on understanding the role of XAI and how the topic has been studied empirically. An SMS is a tool for performing a repeatable and continuable literature search. This paper contributes to the research field with a Systematic Map that visualizes what, how, when, and why XAI has been studied empirically in the field of AI ethics. The mapping reveals research gaps in the area, and empirical contributions are drawn from the analysis. The contributions are reflected on with regard to their theoretical and practical implications. As the scope of the SMS covers the broader research area of AI ethics, the collected dataset opens possibilities to continue the mapping process in other directions.
Citations: 3
Simulation-Based Optimization of User Interfaces for Quality-Assuring Machine Learning Model Predictions
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-05-17 · DOI: https://dl.acm.org/doi/10.1145/3594552
Yu Zhang, Martijn Tennekes, Tim de Jong, Lyana Curier, Bob Coecke, Min Chen

Quality-sensitive applications of machine learning (ML) require quality assurance (QA) by humans before the predictions of an ML model can be deployed. QA for ML (QA4ML) interfaces require users to view a large amount of data and perform many interactions to correct errors made by the ML model. An optimized user interface (UI) can significantly reduce interaction costs. While UI optimization can be informed by user studies evaluating design options, this approach is not scalable because there are typically numerous small variations that can affect the efficiency of a QA4ML interface. Hence, we propose using simulation to evaluate and aid the optimization of QA4ML interfaces. In particular, we focus on simulating the combined effects of human intelligence in initiating appropriate interaction commands and machine intelligence in providing algorithmic assistance for accelerating QA4ML processes. As QA4ML is usually labor-intensive, we use the simulated task completion time as the metric for UI optimization under different interface and algorithm setups. We demonstrate the usage of this UI design method in several QA4ML applications.
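A minimal sketch of the simulation idea, not the authors' simulator: it Monte-Carlo-estimates the task completion time of a simulated QA4ML user under a few hypothetical interface setups and picks the fastest. All timings, error rates, and setup names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical QA4ML interface setups: seconds per correction interaction and
# how many predictions the algorithmic assistance surfaces per screen.
setups = {
    "baseline":       {"t_fix": 4.0, "items_per_screen": 10},
    "bulk-accept":    {"t_fix": 2.5, "items_per_screen": 25},
    "ranked-by-risk": {"t_fix": 3.0, "items_per_screen": 15},
}

def simulated_completion_time(setup, n_items=500, error_rate=0.12, n_runs=200):
    """Monte Carlo estimate of how long a simulated user needs to quality-assure
    n_items model predictions: scanning screens plus fixing flagged errors."""
    times = []
    for _ in range(n_runs):
        n_errors = rng.binomial(n_items, error_rate)          # errors the user must correct
        n_screens = np.ceil(n_items / setup["items_per_screen"])
        t = 1.5 * n_screens + setup["t_fix"] * n_errors       # 1.5 s to scan one screen
        times.append(t + rng.normal(0.0, 5.0))                # per-run human variability
    return float(np.mean(times))

results = {name: simulated_completion_time(cfg) for name, cfg in setups.items()}
for name, t in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:>15s}  {t:7.1f} s")
print("best simulated setup:", min(results, key=results.get))
```

Ranking candidate interfaces by a simulated completion-time metric in this way is what replaces running a separate user study for every small design variation.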

Citations: 0
Combining the Projective Consciousness Model and Virtual Humans for Immersive Psychological Research: A Proof-of-concept Simulating a ToM Assessment
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-05-05 · DOI: https://dl.acm.org/doi/10.1145/3583886
D. Rudrauf, G. Sergeant-Perhtuis, Y. Tisserand, T. Monnor, V. De Gevigney, O. Belli

Relating explicit psychological mechanisms and observable behaviours is a central aim of psychological and behavioural science. One of the challenges is to understand and model the role of consciousness and, in particular, its subjective perspective as an internal level of representation (including for social cognition) in the governance of behaviour. Toward this aim, we implemented the principles of the Projective Consciousness Model (PCM) into artificial agents embodied as virtual humans, extending a previous implementation of the model. Our goal was to offer a proof-of-concept, based purely on simulations, as a basis for a future methodological framework. Its overarching aim is to be able to assess hidden psychological parameters in human participants, based on a model relevant to consciousness research, in the context of experiments in virtual reality. As an illustration of the approach, we focused on simulating the role of Theory of Mind (ToM) in the choice of strategic behaviours of approach and avoidance to optimise the satisfaction of agents’ preferences. We designed a main experiment in a virtual environment that could be used with real humans, allowing us to classify behaviours as a function of order of ToM, up to the second order. We show that agents using the PCM demonstrated expected behaviours with consistent parameters of ToM in this experiment. We also show that the agents could be used to estimate correctly each other’s order of ToM. Furthermore, in a supplementary experiment, we demonstrated how the agents could simultaneously estimate order of ToM and preferences attributed to others to optimize behavioural outcomes. Future studies will empirically assess and fine tune the framework with real humans in virtual reality experiments.

Citations: 0
Explainable Activity Recognition for Smart Home Systems
IF 3.4 · CAS Tier 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-05-05 · DOI: https://dl.acm.org/doi/10.1145/3561533
Devleena Das, Yasutaka Nishimura, Rajan P. Vivek, Naoto Takeda, Sean T. Fish, Thomas Plötz, Sonia Chernova

Smart home environments are designed to provide services that help improve the quality of life for the occupant via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. However, activity recognition systems may not be perfectly accurate, and therefore inconsistencies in smart home operations can lead users reliant on smart home predictions to wonder “Why did the smart home do that?” In this work, we build on insights from Explainable Artificial Intelligence (XAI) techniques and introduce an explainable activity recognition framework in which we leverage leading XAI methods (Local Interpretable Model-agnostic Explanations, SHapley Additive exPlanations (SHAP), Anchors) to generate natural language explanations that explain what about an activity led to the given classification. We evaluate our framework in the context of a commonly targeted smart home scenario: autonomous remote caregiver monitoring for individuals who are living alone or need assistance. Within the context of remote caregiver monitoring, we perform a two-step evaluation: (a) utilize Machine Learning experts to assess the sensibility of explanations and (b) recruit non-experts in two user remote caregiver monitoring scenarios, synchronous and asynchronous, to assess the effectiveness of explanations generated via our framework. Our results show that the XAI approach, SHAP, has a 92% success rate in generating sensible explanations. Moreover, in 83% of sampled scenarios users preferred natural language explanations over a simple activity label, underscoring the need for explainable activity recognition systems. Finally, we show that explanations generated by some XAI methods can lead users to lose confidence in the accuracy of the underlying activity recognition model, while others lead users to gain confidence. Taking all studied factors into consideration, we make a recommendation regarding which existing XAI method leads to the best performance in the domain of smart home automation and discuss a range of topics for future work to further improve explainable activity recognition.
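The sketch below illustrates only the final step of such a framework, turning per-feature attributions into a templated natural-language explanation, using a toy linear activity classifier as a stand-in for the paper's pipeline of activity recognition plus LIME/SHAP/Anchors. The sensor names, activity labels, and sentence template are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy smart-home snapshot: binary sensor features for one time window.
feature_names = ["kitchen_motion", "stove_on", "fridge_opened", "bed_pressure"]
X = np.array([
    [1, 1, 1, 0],   # cooking
    [1, 0, 1, 0],   # cooking
    [0, 0, 0, 1],   # sleeping
    [0, 0, 0, 1],   # sleeping
])
y = np.array(["cooking", "cooking", "sleeping", "sleeping"])
clf = LogisticRegression().fit(X, y)

def explain(x):
    """Turn per-feature contributions of the linear classifier into a
    templated natural-language explanation for one prediction."""
    pred = clf.predict([x])[0]
    # For a binary model, coef_ is oriented towards classes_[1]; flip if needed.
    coef = clf.coef_[0] if pred == clf.classes_[1] else -clf.coef_[0]
    contrib = coef * x                                   # contribution of active sensors
    top = [i for i in np.argsort(contrib)[::-1] if contrib[i] > 0][:2]
    reasons = " and ".join(feature_names[i] for i in top)
    return f"The smart home predicted '{pred}' mainly because of {reasons}."

print(explain(np.array([1, 1, 0, 0])))
```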

Citations: 0