
ACM Transactions on Interactive Intelligent Systems: Latest Publications

GRAFS: Graphical Faceted Search System to Support Conceptual Understanding in Exploratory Search
IF 3.4 | CAS Zone 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-05-05 | DOI: https://dl.acm.org/doi/10.1145/3588319
Mengtian Guo, Zhilan Zhou, David Gotz, Yue Wang

When people search for information about a new topic within large document collections, they implicitly construct a mental model of the unfamiliar information space to represent what they currently know and guide their exploration into the unknown. Building this mental model can be challenging as it requires not only finding relevant documents but also synthesizing important concepts and the relationships that connect those concepts both within and across documents. This article describes a novel interactive approach designed to help users construct a mental model of an unfamiliar information space during exploratory search. We propose a new semantic search system to organize and visualize important concepts and their relations for a set of search results. A user study (n=20) was conducted to compare the proposed approach against a baseline faceted search system on exploratory literature search tasks. Experimental results show that the proposed approach is more effective in helping users recognize relationships between key concepts, leading to a more sophisticated understanding of the search topic while maintaining similar functionality and usability as a faceted search system.
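To make the idea of organizing search results by concepts and relations concrete, here is a minimal, self-contained sketch of one way a concept graph could be built from result snippets. It is not the GRAFS pipeline; the snippets, stopword list, and frequency thresholds are invented for illustration.

```python
# Illustrative sketch (not the GRAFS implementation): extract frequent concepts
# from a set of search-result snippets and link concepts that co-occur in the
# same document, producing a small graph that a faceted UI could visualize.
from collections import Counter
from itertools import combinations

snippets = [
    "faceted search supports exploratory search over large document collections",
    "semantic search organizes concepts and relations for exploratory search",
    "mental models guide exploration of an unfamiliar information space",
]

stopwords = {"and", "of", "for", "the", "an", "a", "over"}

def concepts(text):
    return {w for w in text.lower().split() if w not in stopwords}

term_counts = Counter()
edge_counts = Counter()
for doc in snippets:
    terms = concepts(doc)
    term_counts.update(terms)
    edge_counts.update(frozenset(pair) for pair in combinations(sorted(terms), 2))

# Keep only concepts seen in at least two documents and the edges between them.
nodes = {t for t, c in term_counts.items() if c >= 2}
edges = {tuple(sorted(e)): c for e, c in edge_counts.items()
         if c >= 2 and set(e) <= nodes}

print("concepts:", sorted(nodes))
print("relations:", edges)
```

A faceted interface could then render the surviving concepts as facets and the co-occurrence edges as links between them.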

Citations: 0
Towards Addressing Ambiguous Interactions and Inferring User Intent with Dimension Reduction and Clustering Combinations in Visual Analytics
IF 3.4 | CAS Zone 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-04-17 | DOI: https://dl.acm.org/doi/10.1145/3588565
John Wenskovitch, Michelle Dowling, Chris North

Direct manipulation interactions on projections are often incorporated in visual analytics applications. These interactions enable analysts to provide incremental feedback to the system in a semi-supervised manner, demonstrating relationships that the analyst wishes to find within the data. However, determining the precise intent of the analyst is a challenge. When an analyst interacts with a projection, the inherent ambiguity of interactions can lead to a variety of possible interpretations that the system can infer. Previous work has demonstrated the utility of clusters as an interaction target to address this “With Respect to What” problem in dimension-reduced projections. However, the introduction of clusters introduces interaction inference challenges as well. In this work, we discuss the interaction space for the simultaneous use of semi-supervised dimension reduction and clustering algorithms. We introduce a novel pipeline representation to disambiguate between interactions on observations and clusters, as well as which underlying model is responding to those analyst interactions. We use a prototype visual analytics tool to demonstrate the effects of these ambiguous interactions, their properties, and the insights that an analyst can glean from each.
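As a rough illustration of the kind of model combination the article discusses, the sketch below (not the authors' system) projects the same observations with PCA and clusters them with k-means, so that a later interaction on the 2-D layout could in principle be routed to either model. The data and parameters are synthetic.

```python
# Minimal sketch (not the authors' pipeline): project data with PCA and cluster
# with KMeans on the same observations, so an interaction on the 2-D view could
# later be interpreted against either underlying model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 5)) for c in (0.0, 3.0)])

projection = PCA(n_components=2).fit_transform(X)  # layout shown to the analyst
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # cluster model

# A dragged observation could be fed back as a constraint to either model;
# here we only report which cluster each projected point currently belongs to.
for i in range(3):
    print(f"point {i}: 2-D position {projection[i].round(2)}, cluster {labels[i]}")
```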

Citations: 0
Explaining Recommendations through Conversations: Dialog Model and the Effects of Interface Type and Degree of Interactivity
IF 3.4 | CAS Zone 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-04-12 | DOI: https://dl.acm.org/doi/10.1145/3579541
Diana C. Hernandez-Bocanegra, Jürgen Ziegler

Explaining system-generated recommendations based on user reviews can foster users’ understanding and assessment of the recommended items and the recommender system (RS) as a whole. While up to now explanations have mostly been static, shown in a single presentation unit, some interactive explanatory approaches have emerged in explainable artificial intelligence (XAI), making it easier for users to examine system decisions and to explore arguments according to their information needs. However, little is known about how interactive interfaces should be conceptualized and designed to meet the explanatory aims of transparency, effectiveness, and trust in RS. Thus, we investigate the potential of interactive, conversational explanations in review-based RS and propose an explanation approach inspired by dialog models and formal argument structures. In particular, we investigate users’ perception of two different interface types for presenting explanations, a graphical user interface (GUI)-based dialog consisting of a sequence of explanatory steps, and a chatbot-like natural-language interface.

Since providing explanations by means of natural language conversation is a novel approach, there is little understanding of how users would formulate their questions, and a corresponding lack of datasets. We thus propose an intent model for explanatory queries and describe the development of ConvEx-DS, a dataset containing intent annotations of 1,806 user questions in the domain of hotels, which can be used to train intent detection methods as part of the development of conversational agents for explainable RS. We validate the model by measuring user-perceived helpfulness of answers given based on the implemented intent detection. Finally, we report on a user study investigating users’ evaluation of the two types of interactive explanations proposed (GUI and chatbot), and testing the effect of varying degrees of interactivity that result in greater or lesser access to explanatory information. By using Structural Equation Modeling, we reveal details on the relationships between the perceived quality of an explanation and the explanatory objectives of transparency, trust, and effectiveness. Our results show that providing interactive options for scrutinizing explanatory arguments has a significant positive influence on the evaluation by users (compared to low interactive alternatives). Results also suggest that user characteristics such as decision-making style may have a significant influence on the evaluation of different types of interactive explanation interfaces.
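As an illustration of what an intent detection component for explanatory queries might look like, the sketch below trains a simple TF-IDF plus logistic-regression classifier. The example questions and intent labels are invented and do not come from ConvEx-DS.

```python
# Illustrative intent-detection baseline (the example questions and intent
# labels are invented, not taken from ConvEx-DS): TF-IDF features plus a
# logistic-regression classifier over user questions about an explanation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "why was this hotel recommended to me",
    "what do the reviews say about cleanliness",
    "show me hotels that are cheaper than this one",
    "why is this ranked above the other hotel",
]
intents = ["why_recommended", "ask_feature", "request_alternative", "why_ranking"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(questions, intents)

print(clf.predict(["why did you suggest this hotel"]))
```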

Citations: 0
RadarSense: Accurate Recognition of Mid-air Hand Gestures with Radar Sensing and Few Training Examples
IF 3.4 | CAS Zone 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-03-31 | DOI: 10.1145/3589645
Arthur Sluÿters, S. Lambot, J. Vanderdonckt, Radu-Daniel Vatavu
Microwave radars bring many benefits to mid-air gesture sensing due to their large field of view and independence from environmental conditions, such as ambient light and occlusion. However, radar signals are high-dimensional and usually require complex deep learning approaches. To understand this landscape, we report results from a systematic literature review of (N=118) scientific papers on radar sensing, unveiling a large variety of radar technologies with different operating frequencies, bandwidths, and antenna configurations, as well as various gesture recognition techniques. Although highly accurate, these techniques require a large amount of training data that depends on the type of radar. Therefore, the training results cannot be easily transferred to other radars. To address this aspect, we introduce a new gesture recognition pipeline that implements advanced full-wave electromagnetic modeling and inversion to retrieve physical characteristics of gestures that are radar independent, i.e., independent of the source, antennas, and radar-hand interactions. Inversion of radar signals further reduces the size of the dataset by several orders of magnitude, while preserving the essential information. This approach is compatible with conventional gesture recognizers, such as those based on template matching, which only need a few training examples to deliver high recognition accuracy rates. To evaluate our gesture recognition pipeline, we conducted user-dependent and user-independent evaluations on a dataset of 16 gesture types collected with the Walabot, a low-cost off-the-shelf array radar. We contrast these results with those obtained for the same gesture types collected with an ultra-wideband radar made of a vector network analyzer with a single horn antenna and with a computer vision sensor, respectively. Based on our findings, we suggest some design implications to support future development in radar-based gesture recognition.
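For readers unfamiliar with template matching, the sketch below shows a nearest-neighbor recognizer over fixed-length 1-D gesture signatures, the family of recognizers the abstract says the inverted radar features remain compatible with. The signals and gesture classes are synthetic placeholders, not the paper's inverted radar data.

```python
# Minimal template-matching recognizer sketch (synthetic data, not the paper's
# inverted radar features): resample each 1-D gesture signature to a fixed
# length and classify by nearest Euclidean distance to a stored template.
import numpy as np

def resample(signal, n=32):
    x_old = np.linspace(0.0, 1.0, num=len(signal))
    x_new = np.linspace(0.0, 1.0, num=n)
    return np.interp(x_new, x_old, np.asarray(signal, dtype=float))

def recognize(sample, templates):
    sample = resample(sample)
    return min(templates, key=lambda name: np.linalg.norm(sample - templates[name]))

# One training example per gesture class, as in few-example template matching.
templates = {
    "swipe": resample(np.sin(np.linspace(0, np.pi, 50))),
    "push":  resample(np.linspace(0.0, 1.0, 40)),
}

query = np.sin(np.linspace(0, np.pi, 45)) + np.random.default_rng(1).normal(0, 0.05, 45)
print(recognize(query, templates))  # expected: "swipe"
```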
Citations: 3
LIMEADE: From AI Explanations to Advice Taking
IF 3.4 | CAS Zone 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-03-28 | DOI: https://dl.acm.org/doi/10.1145/3589345
Benjamin Charles Germain Lee, Doug Downey, Kyle Lo, Daniel S. Weld

Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow an AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well-developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This paper introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using high-level vocabulary such as that employed by post-hoc explanations) into an update to an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on seventy real-world models across two broad domains: image classification and text recommendation. We show our method improves accuracy compared to a rigorous baseline on the image classification domains. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
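As a generic illustration of advice taking (not the LIMEADE algorithm), the sketch below turns a piece of high-level feature advice into a few pseudo-labeled documents that are appended to the training set before an opaque text classifier is refit. The documents, labels, and the `apply_advice` helper are all invented.

```python
# Generic advice-taking sketch (not the LIMEADE algorithm): advice such as
# "the term 'reinforcement' is a positive signal" is turned into a few
# pseudo-labeled examples appended to the training set before refitting an
# opaque model (here, a small neural network over TF-IDF features).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = ["survey of convolutional networks", "bird watching field guide",
        "policy gradients for control", "travel photography tips"]
labels = [1, 0, 1, 0]  # 1 = relevant to the user, 0 = not relevant

def apply_advice(docs, labels, term, positive, n_copies=3):
    """Append pseudo-labeled documents expressing the advice about `term`."""
    extra = [f"paper about {term}"] * n_copies
    return docs + extra, labels + [1 if positive else 0] * n_copies

docs, labels = apply_advice(docs, labels, term="reinforcement", positive=True)

model = make_pipeline(TfidfVectorizer(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
model.fit(docs, labels)
print(model.predict(["reinforcement learning for robots"]))
```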

Citations: 0
Crowdsourcing Thumbnail Captions: Data Collection and Validation
IF 3.4 | CAS Zone 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-03-28 | DOI: https://dl.acm.org/doi/10.1145/3589346
Carlos Aguirre, Shiye Cao, Amama Mahmood, Chien-Ming Huang

Speech interfaces, such as personal assistants and screen readers, read image captions to users. Typically, however, only one caption is available per image, which may not be adequate for all situations (e.g., browsing large quantities of images). Long captions provide a deeper understanding of an image but require more time to listen to, whereas shorter captions may not allow for such thorough comprehension yet have the advantage of being faster to consume. We explore how to effectively collect both thumbnail captions—succinct image descriptions meant to be consumed quickly—and comprehensive captions—which allow individuals to understand visual content in greater detail. We consider text-based instructions and time-constrained methods to collect descriptions at these two levels of detail and find that a time-constrained method is the most effective for collecting thumbnail captions while preserving caption accuracy. Additionally, we verify that caption authors using this time-constrained method are still able to focus on the most important regions of an image by tracking their eye gaze. We evaluate our collected captions along human-rated axes—correctness, fluency, amount of detail, and mentions of important concepts—and discuss the potential for model-based metrics to perform large-scale automatic evaluations in the future.
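As a crude stand-in for the automatic evaluations mentioned at the end of the abstract, the sketch below scores a thumbnail caption against a comprehensive reference caption by TF-IDF cosine similarity. A learned sentence encoder would be the stronger model-based choice, and both captions here are invented.

```python
# Crude automatic-evaluation sketch (placeholder captions; a learned sentence
# encoder would be the stronger model-based metric): score a short thumbnail
# caption against a comprehensive reference caption by cosine similarity of
# TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = ("a brown dog leaps over a fallen log in a sunny forest clearing "
             "while its owner watches from a dirt trail")
thumbnail = "a dog jumps over a log in the woods"

vecs = TfidfVectorizer().fit_transform([reference, thumbnail])
score = cosine_similarity(vecs[0], vecs[1])[0, 0]
print(f"similarity: {score:.2f}")
```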

Citations: 0
How do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems
IF 3.4 | CAS Zone 4 (Computer Science) | Q2 Computer Science | Pub Date: 2023-03-24 | DOI: https://dl.acm.org/doi/10.1145/3588594
Tim Schrills, Thomas Franke

When interacting with artificial intelligence (AI) in the medical domain, users frequently face automated information processing, which can remain opaque to them. For example, users with diabetes may interact daily with automated insulin delivery (AID). However, effective AID therapy requires traceability of automated decisions for diverse users. Grounded in research on human-automation interaction, we study Subjective Information Processing Awareness (SIPA) as a key construct to research users’ experience of explainable AI. The objective of the present research was to examine how users experience differing levels of traceability of an AI algorithm. We developed a basic AID simulation to create realistic scenarios for an experiment with N = 80, where we examined the effect of three levels of information disclosure on SIPA and performance. Attributes serving as the basis for insulin needs calculation were shown to users, who predicted the AID system’s calculation after over 60 observations. Results showed a difference in SIPA after repeated observations, associated with a general decline of SIPA ratings over time. Supporting scale validity, SIPA was strongly correlated with trust and satisfaction with explanations. The present research indicates that the effect of different levels of information disclosure may need several repetitions before it manifests. Additionally, high levels of information disclosure may lead to a miscalibration between SIPA and performance in predicting the system’s results. The results indicate that for a responsible design of XAI, system designers could utilize prediction tasks in order to calibrate experienced traceability.
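For context on the kind of calculation such an AID simulation might expose, here is a textbook-style bolus formula: insulin to cover carbohydrates, plus a correction toward a target glucose, minus insulin still on board. It is not necessarily the formula used in the study's simulation, and all parameter values are invented.

```python
# Textbook-style bolus calculation (illustrative only; not necessarily the
# formula used in the study's AID simulation, and all parameter values are
# invented): meal insulin plus a correction toward the target glucose, minus
# insulin still active from earlier doses.
def bolus_units(carbs_g, glucose_mgdl, target_mgdl=110,
                carb_ratio=10.0, correction_factor=40.0, insulin_on_board=0.0):
    meal = carbs_g / carb_ratio                                # units to cover the meal
    correction = (glucose_mgdl - target_mgdl) / correction_factor
    return max(0.0, meal + correction - insulin_on_board)

print(bolus_units(carbs_g=60, glucose_mgdl=180, insulin_on_board=0.5))  # 6.0 + 1.75 - 0.5 = 7.25
```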

Citations: 0