
Latest Publications: ACM Transactions on Interactive Intelligent Systems

Textflow: Toward Supporting Screen-free Manipulation of Situation-Relevant Smart Messages
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-11-05 | DOI: 10.1145/3519263
Pegah Karimi, Emanuele Plebani, Aqueasha Martin-Hammond, D. Bolchini
Texting relies on screen-centric prompts designed for sighted users, which still pose significant barriers to people who are blind or visually impaired (BVI). Can we re-imagine texting untethered from a visual display? In an interview study, 20 BVI adults shared the situations surrounding their texting practices, recurrent topics of conversation, and challenges. Informed by these insights, we introduce TextFlow, a mixed-initiative, context-aware system that generates entirely auditory message options relevant to the user's location, activity, and time of day. Users can browse and select suggested aural messages using finger taps on an off-the-shelf finger-worn device, without having to hold or attend to a mobile screen. In an evaluative study, 10 BVI participants successfully interacted with TextFlow to browse and send messages in screen-free mode. Participants' experiential responses shed light on the importance of bypassing the phone and accessing rapidly controllable messages at their fingertips, while preserving privacy and accuracy relative to speech- or screen-based input. We discuss how non-visual access to proactive, contextual messaging can support blind users in a variety of daily scenarios.
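
The abstract describes a mixed-initiative mapping from context (location, activity, time of day) to a short list of suggested messages. As a rough, hypothetical sketch of that idea (not the authors' implementation; the Context fields, templates, and matching rule are all invented), a template store can be ranked by how many context fields each entry matches:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    location: str      # e.g., "bus_stop", "office"
    activity: str      # e.g., "commuting", "working"
    hour: int          # 0-23, local time

# Hypothetical template store: each entry maps a context pattern to messages.
TEMPLATES = [
    ({"location": "bus_stop", "activity": "commuting"},
     ["I'm at the bus stop, running a few minutes late.",
      "On my way, should arrive soon."]),
    ({"location": "office"},
     ["In a meeting, will text you back later.",
      "Still at work, leaving around six."]),
]

def suggest_messages(ctx: Context, top_k: int = 3) -> list[str]:
    """Rank message templates by how many context fields they match."""
    scored = []
    for pattern, messages in TEMPLATES:
        score = sum(1 for k, v in pattern.items() if getattr(ctx, k) == v)
        if score > 0:
            scored.extend((score, m) for m in messages)
    scored.sort(key=lambda sm: -sm[0])
    return [m for _, m in scored[:top_k]]

now = datetime.now()
print(suggest_messages(Context("bus_stop", "commuting", now.hour)))
```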
Citations: 1
Auto-Icon+: An Automated End-to-End Code Generation Tool for Icon Designs in UI Development
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-11-04 | DOI: 10.1145/3531065
Sidong Feng, Minmin Jiang, Tingting Zhou, Yankun Zhen, Chunyang Chen

Approximately 50% of development resources are devoted to user interface (UI) development tasks [9]. Within this effort, developing icons can be time-consuming, because developers must consider not only effective implementation methods but also easy-to-understand descriptions. In this article, we present Auto-Icon+, an approach for automatically generating readable and efficient code for icons from design artifacts. Informed by interviews that revealed the gap between how designers view icons (as assemblies of multiple components) and how developers handle them (as single images), we apply a heuristic clustering algorithm to compose the components into an icon image. We then propose an approach based on a deep learning model and computer vision methods to convert the composed icon image into fonts with descriptive labels, thereby reducing laborious manual effort for developers and facilitating UI development. We quantitatively evaluate the quality of our method in a real-world UI development environment and demonstrate that it offers developers accurate, efficient, readable, and usable code for icon designs, saving 65.2% of implementation time.
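The abstract's first step, heuristic clustering of designer-drawn components into one icon image, can be illustrated with a toy box-merging routine. This is a minimal sketch under the assumption that components are axis-aligned bounding boxes and that "belonging to the same icon" means overlapping or nearly touching; the paper's actual heuristics are not published here:

```python
# A minimal sketch (not the authors' code) of heuristic clustering that
# merges nearby component bounding boxes into a single icon region.
# Boxes are (x1, y1, x2, y2); the pixel gap threshold is hypothetical.

def boxes_close(a, b, gap=4):
    """True if two boxes overlap or lie within `gap` pixels of each other."""
    return not (a[2] + gap < b[0] or b[2] + gap < a[0] or
                a[3] + gap < b[1] or b[3] + gap < a[1])

def merge(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def cluster_components(boxes):
    """Greedily merge boxes until no two remaining boxes are close."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if boxes_close(boxes[i], boxes[j]):
                    boxes[i] = merge(boxes[i], boxes[j])
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes

# Three overlapping shapes plus one distant component -> two icon regions.
print(cluster_components([(0, 0, 10, 10), (8, 2, 16, 12), (5, 9, 12, 20),
                          (100, 100, 120, 118)]))
```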

Citations: 0
Effects of Explanations in AI-Assisted Decision Making: Principles and Comparisons
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-11-04 | DOI: 10.1145/3519266
Xinru Wang, Ming Yin

Recent years have witnessed a growing literature on the empirical evaluation of explainable AI (XAI) methods. This study contributes to this ongoing conversation by presenting a comparison of the effects of a set of established XAI methods in AI-assisted decision making. Based on our review of previous literature, we highlight three desirable properties that ideal AI explanations should satisfy: improve people's understanding of the AI model, help people recognize the model's uncertainty, and support people's calibrated trust in the model. Through three randomized controlled experiments, we evaluate whether four common model-agnostic explainable AI methods satisfy these properties on two types of AI models of varying complexity, and in two kinds of decision-making contexts where people perceive themselves as having different levels of domain expertise. Our results demonstrate that many AI explanations do not satisfy any of the desirable properties when used on decision-making tasks in which people have little domain expertise. On decision-making tasks where people are more knowledgeable, the feature contribution explanation satisfies more of these desiderata, even when the AI model is inherently complex. We conclude by discussing the implications of our study for improving the design of XAI methods to better support human decision making, and for advancing more rigorous empirical evaluation of XAI methods.
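
For readers unfamiliar with the "feature contribution explanation" the study singles out, the toy below shows the idea in the simplest possible setting, a linear model, where each feature's contribution to one prediction decomposes exactly as weight times deviation from the mean. The data, weights, and feature names are invented; real XAI methods (e.g., SHAP-style attributions) generalize this to complex models:

```python
import numpy as np

# Toy illustration (not the paper's implementation) of a feature
# contribution explanation: for a linear model, each feature's contribution
# to one prediction can be read off as weight * (value - mean).

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # toy dataset
w, b = np.array([1.5, -2.0, 0.5]), 0.3             # "trained" weights
y_hat = X @ w + b

x = X[0]                                            # instance to explain
contrib = w * (x - X.mean(axis=0))                  # per-feature contribution
baseline = X.mean(axis=0) @ w + b                   # prediction at the mean

for name, c in zip(["age", "income", "tenure"], contrib):  # hypothetical names
    print(f"{name:>7}: {c:+.3f}")
print(f"baseline {baseline:.3f} + contributions {contrib.sum():.3f} "
      f"= prediction {y_hat[0]:.3f}")
```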

Citations: 0
Learning Semantically Rich Network-based Multi-modal Mobile User Interface Embeddings
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-11-04 | DOI: 10.1145/3533856
Gary Ang, Ee-Peng Lim

Semantically rich information from multiple modalities (text, code, images, and categorical and numerical data) co-exists in the user interface (UI) design of mobile applications. Moreover, each UI design is composed of inter-linked UI entities that support different functions of an application, e.g., a UI screen comprising a UI taskbar, a menu, and multiple button elements. Existing UI representation learning methods, unfortunately, are not designed to capture the multi-modal and linkage structure between UI entities. To support effective search and recommendation applications over mobile UIs, we need UI representations that integrate the latent semantics present in both multi-modal information and the linkages between UI entities. In this article, we present a novel self-supervised model: the Multi-modal Attention-based Attributed Network Embedding (MAAN) model. MAAN is designed to capture the structural network information present within the linkages between UI entities, as well as the multi-modal attributes of the UI entity nodes. Based on the variational autoencoder framework, MAAN learns semantically rich UI embeddings in a self-supervised manner by reconstructing the attributes of UI entities and the linkages between them. The generated embeddings can be applied to a variety of downstream tasks: predicting UI elements associated with UI screens, inferring missing UI screen and element attributes, predicting UI user ratings, and retrieving UIs. Extensive experiments, including user evaluations, conducted on datasets from RICO, a rich real-world mobile UI repository, demonstrate that MAAN outperforms other state-of-the-art models. The number of linkages between UI entities can provide further information on the role of different UI entities in UI designs. However, MAAN does not capture edge attributes. To extend and generalize MAAN to learn even richer UI embeddings, we further propose EMAAN to capture edge attributes. Additional extensive experiments on EMAAN show that it improves on MAAN's performance and similarly outperforms state-of-the-art models.
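
A compact sketch of the core mechanism the abstract names, a variational autoencoder that reconstructs both node attributes and the links between UI entities, is given below in PyTorch. It is illustrative only (single-layer encoder, VGAE-style inner-product link decoder, made-up dimensions), not the authors' MAAN architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributedGraphVAE(nn.Module):
    """Minimal VAE over an attributed graph: encode node attributes into a
    latent embedding, then reconstruct attributes and the adjacency."""

    def __init__(self, attr_dim, latent_dim=32):
        super().__init__()
        self.mu = nn.Linear(attr_dim, latent_dim)
        self.logvar = nn.Linear(attr_dim, latent_dim)
        self.attr_decoder = nn.Linear(latent_dim, attr_dim)

    def forward(self, x, adj):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        attr_recon = self.attr_decoder(z)
        adj_recon = torch.sigmoid(z @ z.t())       # inner-product link decoder
        recon = F.mse_loss(attr_recon, x) + F.binary_cross_entropy(adj_recon, adj)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, recon + kl

n, d = 50, 128                                     # 50 UI entities, 128-dim attrs
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.1).float()
adj.fill_diagonal_(1.0)     # treat self-links as present, as inner-product
                            # decoders score a node against itself highly
model = AttributedGraphVAE(d)
z, loss = model(x, adj)
print(z.shape, loss.item())
```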

Citations: 0
Generating User-Centred Explanations via Illocutionary Question Answering: From Philosophy to Interfaces
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-11-04 | DOI: 10.1145/3519265
Francesco Sovrano, Fabio Vitali

We propose a new method for generating explanations with Artificial Intelligence (AI) and a tool to test its expressive power within a user interface. To bridge the gap between philosophy and human-computer interfaces, we present a new approach for generating interactive explanations, based on a pipeline of AI algorithms that structures natural language documents into knowledge graphs and answers questions effectively and satisfactorily. With this work, we aim to show that the philosophical theory of explanations presented by Achinstein can be adapted for implementation in a concrete software application, as an interactive and illocutionary process of answering questions. Specifically, our contribution is an approach to framing illocution in a computer-friendly way, achieving user-centrality through statistical question answering. Indeed, we frame the illocution of an explanatory process as the mechanism responsible for anticipating the needs of the explainee in the form of unposed, implicit, archetypal questions, hence improving the user-centrality of the underlying explanatory process. We therefore hypothesise that if an explanatory process is an illocutionary act of providing content-giving answers to questions, and illocution is as we define it, then the more explicit and implicit questions an explanatory tool can answer, the more usable (as per ISO 9241-210) its explanations are. We tested this hypothesis with a user study involving more than 60 participants, on two XAI-based systems, one for credit approval (finance) and one for heart disease prediction (healthcare). The results showed that increasing the illocutionary power of an explanatory tool can produce statistically significant improvements in effectiveness (P < .05). This, combined with a visible alignment between the increments in effectiveness and satisfaction, suggests that our understanding of illocution is plausible, giving evidence in favour of our theory.
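
To make the "archetypal questions" idea concrete: the sketch below pairs generic why/what/how questions with the document sentences that best answer them. Plain TF-IDF similarity stands in for the paper's knowledge-graph and neural question-answering pipeline, and the sentences and questions are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in (not the authors' pipeline): answer unposed archetypal
# questions by retrieving the most lexically similar document sentence.

sentences = [
    "The application was denied because the debt-to-income ratio exceeded 0.4.",
    "The applicant can appeal the decision within 30 days.",
    "The system was trained on five years of loan outcomes.",
]
archetypes = ["Why was the application denied?",
              "What can the applicant do next?",
              "How was the system trained?"]

vec = TfidfVectorizer().fit(sentences + archetypes)
sim = cosine_similarity(vec.transform(archetypes), vec.transform(sentences))

for question, row in zip(archetypes, sim):
    print(question, "->", sentences[row.argmax()])
```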

Citations: 0
ForSense: Accelerating Online Research Through Sensemaking Integration and Machine Research Support
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-11-04 | DOI: 10.1145/3532853
Gonzalo Ramos, Napol Rachatasumrit, Jina Suh, Rachel Ng, Christopher Meek

Online research is a frequent and important activity people perform on the Internet, yet current support for this task is basic, fragmented, and not well integrated into web browser experiences. Guided by sensemaking theory, we present ForSense, a browser extension for accelerating people's online research experience. The two primary sources of novelty in ForSense are its integration of multiple stages of online research and its machine assistance to the user, which leverages recent advances in neural-driven machine reading. We use ForSense as a design probe to explore (1) the benefits of integrating multiple stages of online research, (2) the opportunities to accelerate online research using current advances in machine reading, (3) the opportunities to support online research tasks in the presence of imprecise machine suggestions, and (4) insights about the behaviors people exhibit when performing online research, the pages they visit, and the artifacts they create. Through our design probe, we observe people performing online research tasks and see that they benefit from ForSense's integration and machine support for online research. From the information and insights we collected, we derive and share key recommendations for designing and supporting imprecise machine assistance for research tasks.
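
One sensemaking stage the abstract alludes to, organizing material clipped during online research, can be approximated by grouping clippings by topical similarity. The sketch below is a toy stand-in (invented clippings, a fixed cluster count, TF-IDF plus k-means rather than neural machine reading):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in (not ForSense itself): group research clippings into piles
# of topically related evidence.

clippings = [
    "Intermittent fasting may improve insulin sensitivity in some adults.",
    "Trials of fasting and time-restricted eating report modest weight loss.",
    "Electric cars reduce tailpipe emissions in dense city centers.",
    "Battery recycling remains a hurdle for electric vehicle adoption.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(clippings)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster in range(2):
    print(f"Pile {cluster}:")
    for text, label in zip(clippings, labels):
        if label == cluster:
            print("  -", text)
```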

Citations: 0
GO-Finder: A Registration-free Wearable System for Assisting Users in Finding Lost Hand-held Objects
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-11-04 | DOI: 10.1145/3519268
Takuma Yagi, Takumi Nishiyasu, Kunimasa Kawasaki, Moe Matsuki, Yoichi Sato
People spend an enormous amount of time and effort looking for lost objects. To help remind people of the location of lost objects, various computational systems that provide information on their locations have been developed. However, prior systems for assisting people in finding objects require users to register the target objects in advance. This requirement imposes a cumbersome burden on users, and such systems cannot remind them of unexpectedly lost objects. We propose GO-Finder ("Generic Object Finder"), a registration-free wearable camera-based system for assisting people in finding an arbitrary number of objects, based on two key features: automatic discovery of hand-held objects and image-based candidate selection. Given video taken from a wearable camera, GO-Finder automatically detects and groups hand-held objects to form a visual timeline of the objects. Users can retrieve the last appearance of an object by browsing the timeline through a smartphone app. We conducted user studies to investigate how users benefit from GO-Finder. In the first study, we asked participants to perform an object retrieval task and confirmed improved accuracy and reduced mental load in the object search task when clear visual cues on object locations are provided. In the second study, the system's usability in a longer and more realistic scenario was verified, accompanied by an additional feature of context-based candidate filtering. Participant feedback also suggested that GO-Finder is useful in realistic scenarios where more than one hundred objects appear.
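The grouping step the abstract describes, forming a visual timeline from hand-held object detections, might look roughly like the following: each detection carries an appearance embedding, similar detections are greedily merged into one object group, and each group's last appearance becomes a timeline entry. The threshold, embeddings, and greedy rule are illustrative assumptions, not the authors' method:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_timeline(detections, threshold=0.85):
    """detections: list of (frame_index, embedding); returns one
    (last_frame, prototype) entry per discovered object group."""
    groups = []                      # each: {"proto": vec, "last_frame": int}
    for frame, emb in detections:
        best = max(groups, key=lambda g: cosine(g["proto"], emb), default=None)
        if best is not None and cosine(best["proto"], emb) >= threshold:
            best["proto"] = (best["proto"] + emb) / 2   # refine prototype
            best["last_frame"] = frame                  # keep last appearance
        else:
            groups.append({"proto": emb.copy(), "last_frame": frame})
    return [(g["last_frame"], g["proto"]) for g in groups]

rng = np.random.default_rng(1)
key, mug = rng.normal(size=16), rng.normal(size=16)   # fake appearance vectors
dets = [(10, key), (42, key + 0.01), (97, mug)]
print([frame for frame, _ in build_timeline(dets)])   # -> [42, 97]
```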
Citations: 0
PEACE: A Model of Key Social and Emotional Qualities of Conversational Chatbots
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-11-04 | DOI: 10.1145/3531064
Ekaterina Svikhnushina, Pearl Pu

Open-domain chatbots engage with users in natural conversations to socialize and establish bonds. However, designing and developing an effective open-domain chatbot is challenging, and it is unclear which qualities of a chatbot best correspond to users' expectations and preferences. Even though existing work has considered a wide range of aspects, some key components are still missing. For example, the role of chatbots' ability to communicate with humans at the emotional level remains an open subject of study. Furthermore, these trait qualities are likely to span several dimensions, so it is crucial to understand how the different qualities relate to and interact with each other and what the core aspects are. For this purpose, we first designed an exploratory user study aimed at gaining a basic understanding of the desired qualities of chatbots, with a special focus on their emotional intelligence. Using the findings from the first study, we constructed a model of the desired traits by carefully selecting a set of features. With the help of a large-scale survey and structural equation modeling, we further validated the model using the data collected from the survey. The final outcome is the PEACE model (Politeness, Entertainment, Attentive Curiosity, and Empathy). By analyzing the dependencies between the different PEACE constructs, we shed light on the importance of and interplay between the chatbots' qualities, and on the effect of users' attitudes and concerns on their expectations of the technology. Not only does PEACE define the key ingredients of the social qualities of a chatbot, it also helped us derive a set of design implications useful for developing socially adequate and emotionally aware open-domain chatbots.
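
As a loose illustration of the survey analysis behind a construct model like PEACE (not the paper's structural equation modeling), the sketch below averages simulated Likert items into one composite score per construct and then inspects inter-construct correlations; item counts and responses are made up:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 120                                           # simulated respondents
constructs = {"politeness": 3, "entertainment": 4,
              "curiosity": 3, "empathy": 4}       # items per construct

scores = {}
for name, k in constructs.items():
    latent = rng.normal(size=n)                   # shared construct signal
    items = latent[:, None] + rng.normal(scale=0.8, size=(n, k))  # noisy items
    scores[name] = items.mean(axis=1)             # composite score

names = list(scores)
corr = np.corrcoef(np.vstack([scores[m] for m in names]))
for i, a in enumerate(names):
    for j in range(i + 1, len(names)):
        print(f"{a} vs {names[j]}: r = {corr[i, j]:+.2f}")
```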

Citations: 0
A Personalized Interaction Mechanism Framework for Micro-moment Recommender Systems
IF 3.4 | CAS Quartile 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2022-10-29 | DOI: 10.1145/3569586
Yi-ling Lin, Shao-Wei Lee
The emergence of the micro-moment concept highlights the influence of context, and recommender system design should reflect this trend. In response to different contexts, a micro-moment recommender system (MMRS) requires an effective interaction mechanism that allows users to easily interact with the system in a way that supports autonomy and promotes the creation and expression of self. We study four types of interaction mechanisms to understand which personalization approach is the most suitable design for MMRSs. We assume that designs that support micro-moment needs well are those that give users more control over the system while imposing a lighter user burden. We test our hypothesis via a two-week between-subjects field study in which participants used our system and provided feedback. User-initiated and mixed-initiative intention mechanisms show higher perceived active control, and the additional controls do not add to user burdens. Therefore, these two designs suit the MMRS interaction mechanism.
Citations: 0