
Latest publications from the International Journal of Human-Computer Studies

Preventing users from going down rabbit holes of extreme video content: A study of the role played by different modes of autoplay
IF 5.3 · CAS Tier 2 (Computer Science) · Q1 (Social Sciences) · Pub Date: 2024-06-05 · DOI: 10.1016/j.ijhcs.2024.103303
Cheng Chen, Jingshi Kang, Pejman Sajjadi, S. Shyam Sundar

The autoplay feature of video platforms is often blamed for users going down rabbit holes of binge-watching extreme content. However, autoplay is not necessarily a passive experience, because users can toggle the feature off if they want. While the automation aspect is passive, the toggle option signals interactivity, making it “interpassive,” which lies between completely passive autoplay and manual initiation of each video. We empirically compare these three modes of video viewing in a user study (N = 394), which exposed participants to either extreme or non-extreme content under conditions of manual play, interpassive autoplay, or completely passive autoplay. Results show that interpassive autoplay is favored over the other two. It triggers the control heuristic compared to passive autoplay, but leads to higher inattentiveness compared to manual play. Both the invoked control heuristic and inattentiveness result in higher rabbit hole perception. These findings have implications for socially responsible design of the autoplay feature.

Citations: 0
Mixed-reality art as shared experience for cross-device users: Materialize, understand, and explore
IF 5.4 · CAS Tier 2 (Computer Science) · Q1 (Social Sciences) · Pub Date: 2024-05-31 · DOI: 10.1016/j.ijhcs.2024.103291
Hayoun Moon, Mia Saade, Daniel Enriquez, Zachary Duer, Hye Sung Moon, Sang Won Lee, Myounghoon Jeon

Virtual reality (VR) has opened new possibilities for creative expression, while the 360-degree head-worn display (HWD) delivers a fully immersive experience in the world of art. The immersiveness, however, comes with the cost of blocking out the physical world, including bystanders without an HWD. Therefore, VR experiences in public (e.g., galleries, museums) often lack social interactivity, which plays an important role in forming aesthetic experiences. In the current study, we explored the application of a cross-device mixed reality (MR) platform in the domain of art to enable social and inclusive experiences with artworks that utilize VR technology. Our concept of interest features co-located audiences of HWD and mobile device users who interact across physical and virtual worlds. We conducted focus groups (N=22) and expert interviews (N=7) to identify the concept’s potential scenarios and fundamental components, as well as expected benefits and concerns. We also share our process of creating In-Between Spaces, an interactive artwork in MR that encourages social interactivity among cross-device audiences. Our exploration presents a prospective direction for future VR/MR aesthetic content, especially at public events and exhibitions targeting crowd audiences.

Citations: 0
DigCode—A generic mid-air gesture coding method on human-computer interaction
IF 5.4 · CAS Tier 2 (Computer Science) · Q1 (Social Sciences) · Pub Date: 2024-05-26 · DOI: 10.1016/j.ijhcs.2024.103302
Xiaozhou Zhou, Lesong Jia, Ruidong Bai, Chengqi Xue

With high flexibility and rich semantic expressiveness, mid-air gesture interaction is an important part of natural human-computer interaction (HCI) and has broad application prospects. However, there is no unified representation framework for designing, recording, investigating and comparing HCI mid-air gestures. Therefore, this paper proposes an interpretable coding method, DigCode, for HCI mid-air gestures. DigCode converts unstructured continuous actions into a structured discrete string encoding. From the perspective of human cognition and expression, the research employed psychophysical methods to divide gesture actions into discrete intervals, defined coding rules for representation in letters and numbers, and developed automated programs to enable encoding and decoding using gesture sensors. By accounting for both human understanding and computer recognition, the coding method can cover existing representations of HCI mid-air gestures and can be applied to HCI mid-air gesture design and gesture library construction.
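The core idea of the abstract — cutting a continuous movement stream into discrete intervals and labeling each interval with letters and digits to form a readable code string — can be sketched roughly as below. The direction sectors, speed bands, and code layout here are illustrative assumptions only; the actual DigCode intervals and rules come from the paper's psychophysical experiments and are not reproduced here.

```python
import math
from typing import List, Tuple

# Hypothetical discretization (NOT the paper's actual tables):
# 8 direction sectors of 45 degrees -> digits 1..8,
# 3 speed bands -> letters S (slow), M (medium), F (fast).
DIRECTIONS = 8
SPEED_BANDS = [(0.0, 0.2, "S"), (0.2, 1.0, "M"), (1.0, float("inf"), "F")]

def encode_segment(dx: float, dy: float, dt: float) -> str:
    """Encode one movement segment as a letter (speed band) + digit (direction sector)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = int(angle // (2 * math.pi / DIRECTIONS)) + 1  # digits 1..8
    speed = math.hypot(dx, dy) / dt
    letter = next(l for lo, hi, l in SPEED_BANDS if lo <= speed < hi)
    return f"{letter}{sector}"

def encode_gesture(points: List[Tuple[float, float]], dt: float = 0.1) -> str:
    """Turn a sampled 2-D trajectory into a structured code string,
    collapsing consecutive identical segment codes."""
    codes: List[str] = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code = encode_segment(x1 - x0, y1 - y0, dt)
        if not codes or codes[-1] != code:
            codes.append(code)
    return "-".join(codes)
```

Because every letter and digit maps back to a fixed interval, a decoder can invert the same lookup tables, which is what makes such a string encoding both human-interpretable and machine-recognizable.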

Citations: 0
From explainable to interactive AI: A literature review on current trends in human-AI interaction
IF 5.4 · CAS Tier 2 (Computer Science) · Q1 (Social Sciences) · Pub Date: 2024-05-23 · DOI: 10.1016/j.ijhcs.2024.103301
Muhammad Raees, Inge Meijerink, Ioanna Lykourentzou, Vassilis-Javed Khan, Konstantinos Papangelis

AI systems are increasingly being adopted across various domains and application areas. With this surge, there is a growing research focus and societal concern for actively involving humans in developing, operating, and adopting these systems. Despite this concern, most existing literature on AI and Human–Computer Interaction (HCI) primarily focuses on explaining how AI systems operate and, at times, allowing users to contest AI decisions. Existing studies often overlook more impactful forms of user interaction with AI systems, such as giving users agency beyond contestability and enabling them to adapt and even co-design the AI’s internal mechanics. In this survey, we aim to bridge this gap by reviewing the state of the art in Human-Centered AI literature, the domain where AI and HCI studies converge, extending past Explainable and Contestable AI and delving into Interactive AI and beyond. Our analysis contributes to shaping the trajectory of future Interactive AI design and advocates for a more user-centric approach that provides users with greater agency, fostering not only their understanding of AI’s workings but also their active engagement in its development and evolution.

Citations: 0
The way you assess matters: User interaction design of survey chatbots for mental health
IF 5.4 · CAS Tier 2 (Computer Science) · Q1 (Social Sciences) · Pub Date: 2024-05-22 · DOI: 10.1016/j.ijhcs.2024.103290
Yucheng Jin, Li Chen, Xianglin Zhao, Wanling Cai

The global pandemic has pushed human society into a mental health crisis, prompting the development of various chatbots to supplement the limited mental health workforce. Several organizations have employed mental health survey chatbots for public mental status assessments. These survey chatbots typically ask closed-ended questions (Closed-EQs) to assess specific psychological issues like anxiety, depression, and loneliness, followed by open-ended questions (Open-EQs) for deeper insights. While Open-EQs are naturally presented conversationally in a survey chatbot, Closed-EQs can be delivered as embedded forms or within conversations, with the length of the questionnaire varying according to the psychological assessment. This study investigates how the interaction style of Closed-EQs and the questionnaire length affect user perceptions regarding survey credibility, enjoyment, and self-awareness, as well as their responses to Open-EQs in terms of quality and self-disclosure in a survey chatbot. We conducted a 2 (interaction style: form-based vs. conversation-based) × 3 (questionnaire length: short vs. middle vs. long) between-subjects study (N=213) with a loneliness survey chatbot. The results indicate that the form-based interaction significantly enhances the perceived credibility of the assessment, thereby improving response quality and self-disclosure in subsequent Open-EQs and fostering self-awareness. We discuss our findings for the interaction design of psychological assessment in a survey chatbot for mental health.

Citations: 0
Effect of interface design on cognitive workload in unmanned aerial vehicle control
IF 5.4 · CAS Tier 2 (Computer Science) · Q1 (Social Sciences) · Pub Date: 2024-05-16 · DOI: 10.1016/j.ijhcs.2024.103287
Wenjuan Zhang, Yunmei Liu, David B. Kaber

Unmanned Aerial Vehicle (UAV) control interfaces are critical channels for transferring information between the vehicle and an operator. Research on system performance has focused on enhancing vehicle automation, and some work has evaluated cognitive workload for existing UAV interfaces. The potential for usable interface design to reduce cognitive workload during the early design phase has been largely overlooked. This study addresses these gaps by: (1) evaluating the effectiveness of a contemporary UAV interface design tool (the Modified GEDIS-UAV) in moderating user workload; (2) examining the effectiveness of various UAV interface designs for minimizing cognitive workload under different control task pacing; and (3) exploring the use of eye-tracking measures, traditionally applied in other domains, as indicators of cognitive workload in UAV operations. We prototyped three different interface designs, classified as “baseline”, “enhanced” and “degraded” interfaces. Cognitive workload in UAV operation was manipulated in terms of levels of vehicle speed (“low” and “high”). Physiological and subjective measures of workload were collected for all combinations of interface design and task demand. Results revealed the “enhanced” interface to yield the lowest operator cognitive workload and to support operator resilience to increased control task demand, as compared to the “baseline” and “degraded” interfaces. In addition, task demand was found to elevate operator cognitive workload, particularly in terms of "mental" and "temporal" demands and operator perceptions of "performance". The study also demonstrated the utility of eye-tracking technology for detecting cognitive workload in UAV operations. This research provides practical guidance for UAV control interface design to manage operator workload. The methods employed in the study are applicable to interface evaluation for various types of UAVs and other unmanned systems to enhance human-automation interaction.

Citations: 0
Priming users with babies’ gestures: Investigating the influences of priming with different development origin of image schemas in gesture elicitation study
IF 5.4 · CAS Tier 2 (Computer Science) · Q1 (Social Sciences) · Pub Date: 2024-05-10 · DOI: 10.1016/j.ijhcs.2024.103288
Yanming He, Qizhang Sun, Peiyao Cheng, Shumeng Hou, Lei Zhou

Gesture elicitation study (GES) is an effective method for designing gestures for various contexts. By involving end-users, GES yields intuitive gestures because they directly reflect end-users’ mental models and preferences. However, limited by personal experience, end-users cannot take full advantage of technology while proposing gestures, a limitation referred to as legacy bias. To overcome this, previous studies demonstrate that users’ performance can be improved by priming, such as viewing gestures, watching fictional movies, and experiencing framed scenarios. This research extends this line of studies by considering the developmental origin of image schemas in priming. More specifically, we compared the influences of no priming, priming with early image schemas (EIS), and priming with late image schemas (LIS) on GES. Controlled experiments were conducted (N = 120) along the three stages of GES: users’ generation of gestures (Experiment 1), final gesture sets (Experiment 2), and end-users’ learnability of gestures (Experiment 3). Results show that users are largely influenced by the developmental origin of image schemas in priming. LIS priming improves gesture proposal production in comparison to the no-priming condition. As for end-users’ evaluation, EIS-priming gestures exhibit higher initial and overall learnability.

Citations: 0
The Basic Needs in Games Scale (BANGS): A new tool for investigating positive and negative video game experiences
IF 5.4 · CAS Tier 2 (Computer Science) · Q1 (Social Sciences) · Pub Date: 2024-05-06 · DOI: 10.1016/j.ijhcs.2024.103289
Nick Ballou, Alena Denisova, Richard Ryan, C. Scott Rigby, Sebastian Deterding

Players’ basic psychological needs for autonomy, competence, and relatedness are among the constructs most commonly used in research on what makes video games so engaging, and how they might support or undermine user wellbeing. However, existing measures of basic psychological needs in games have important limitations: they either do not measure need frustration, or measure it in a way that may not be appropriate for the video games domain; they struggle to capture feelings of relatedness in both single- and multiplayer contexts; and they often lack validity evidence for certain contexts (e.g., playtesting vs. experience with games as a whole). In this paper, we report on the design and validation of a new measure, the Basic Needs in Games Scale (BANGS), whose 6 subscales cover satisfaction and frustration of each basic psychological need in gaming contexts. The scale was validated and evaluated over five studies with a total of 1246 unique participants. Results supported the theorized structure of the scale and provided evidence for discriminant, convergent and criterion validity. Results also show that the scale performs well over different contexts (including evaluating experiences in a single game session or across various sessions) and over time, supporting measurement invariance. Further improvements to the scale are warranted, as results indicated lower reliability in the autonomy frustration subscale, and a surprising non-significant correlation between relatedness satisfaction and frustration. Despite these minor limitations, BANGS is a reliable and theoretically sound tool for researchers to measure basic needs satisfaction and frustration with a degree of domain validity not previously available.

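The six-subscale structure described above lends itself to straightforward scoring: average the Likert responses belonging to each subscale. The sketch below illustrates this; the item IDs, three-items-per-subscale layout, and 1-7 response range are illustrative assumptions, not the published instrument.

```python
from statistics import mean

# Hypothetical item-to-subscale mapping for a BANGS-style instrument: six
# subscales covering satisfaction and frustration of autonomy, competence,
# and relatedness. Item IDs and counts are illustrative assumptions.
SUBSCALES = {
    "autonomy_satisfaction":    ["as1", "as2", "as3"],
    "autonomy_frustration":     ["af1", "af2", "af3"],
    "competence_satisfaction":  ["cs1", "cs2", "cs3"],
    "competence_frustration":   ["cf1", "cf2", "cf3"],
    "relatedness_satisfaction": ["rs1", "rs2", "rs3"],
    "relatedness_frustration":  ["rf1", "rf2", "rf3"],
}

def score(responses):
    """Average the Likert responses (e.g., on a 1-7 scale) within each subscale."""
    return {name: mean(responses[item] for item in items)
            for name, items in SUBSCALES.items()}
```

Keeping satisfaction and frustration as separate scores (rather than one bipolar scale) mirrors the scale's design, since the abstract notes the two are not simply inversely correlated.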
Virtual reality experiences for breathing and relaxation training: The effects of real vs. placebo biofeedback
IF 5.4 | CAS Tier 2, Computer Science | Q1, Social Sciences | Pub Date: 2024-04-27 | DOI: 10.1016/j.ijhcs.2024.103275
Luca Chittaro, Marta Serafini, Yvonne Vulcano

Virtual reality biofeedback systems for relaxation training can be an effective tool for reducing stress and anxiety levels, but most of them offer a limited user experience associated with the execution of a single task and a biofeedback mechanism that reflects a single physiological measurement. Furthermore, user evaluations of such systems do not typically include a placebo condition, making it difficult to determine the actual contribution of biofeedback. This paper proposes a VR system for breathing and relaxation training that (i) uses biofeedback mechanisms based on multiple physiological measurements, and (ii) provides a richer user experience through a narrative that unfolds in phases where the user is the main character and controls different elements of the virtual environment through biofeedback. To evaluate the system and to assess the actual contribution of biofeedback, we compared two conditions involving 35 participants: a biofeedback condition that exploited real-time measurements of the user's breathing, skin conductance, and heart rate; and a placebo control condition, in which changes in the virtual environment followed physiological values recorded from a session with another user. The results showed that the proposed virtual experience helped users relax in both conditions, but real biofeedback produced results that were superior to placebo biofeedback in terms of both relaxation and sense of presence. These outcomes highlight the important role that biofeedback can play in virtual reality systems for relaxation training, as well as the need for researchers to consider placebo conditions in evaluating this kind of system.

用于放松训练的虚拟现实生物反馈系统可以成为降低压力和焦虑水平的有效工具,但大多数虚拟现实生物反馈系统提供的用户体验仅限于执行单一任务和反映单一生理测量结果的生物反馈机制。此外,此类系统的用户评估通常不包括安慰剂条件,因此很难确定生物反馈的实际贡献。本文提出了一种用于呼吸和放松训练的 VR 系统,该系统具有以下特点(i) 使用基于多种生理测量的生物反馈机制,(ii) 通过分阶段展开的叙事提供更丰富的用户体验,用户是主角,通过生物反馈控制虚拟环境的不同元素。为了评估该系统和生物反馈的实际贡献,我们对 35 名参与者参与的两种情况进行了比较:一种是利用实时测量用户呼吸、皮肤电导和心率的生物反馈情况;另一种是安慰剂对照情况,其中虚拟环境的变化是根据与其他用户的会话中记录的生理值进行的。结果表明,在这两种条件下,拟议的虚拟体验都能帮助用户放松,但就放松和临场感而言,真实生物反馈产生的效果优于安慰剂生物反馈。这些结果凸显了生物反馈在虚拟现实系统放松训练中的重要作用,以及研究人员在评估此类系统时考虑安慰剂条件的必要性。
EEBA: Efficient and ergonomic Big-Arm for distant object manipulation in VR
IF 5.4 | CAS Tier 2, Computer Science | Q1, Social Sciences | Pub Date: 2024-04-25 | DOI: 10.1016/j.ijhcs.2024.103273
Jian Wu, Lili Wang, Sio Kei Im, Chan Tong Lam

Object manipulation is the most common form of interaction in virtual reality. We introduce an efficient and ergonomic Big-Arm method to improve the efficiency and comfort of manipulating distant objects in virtual reality. We prolong the upper arm and forearm lengths according to the maximum distance of the manipulation space and construct a linear mapping between the real and virtual elbow angles, which makes manipulation easier to control and more efficient. We propose an optimized elbow angle mapping to further improve the efficiency and comfort of distant object manipulation. Two user studies were designed and conducted to evaluate the performance of our optimized Big-Arm method. The results show that our method achieves significant improvement in efficiency, ergonomic performance, and task load reduction for manipulating distant objects (distance ≥ 6 m) compared to the state-of-the-art methods. At the same time, our method exhibits superior usability.

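The abstract's two ingredients, scaling the arm segments so that full extension reaches the farthest point of the manipulation space, and linearly mapping the real elbow angle onto a virtual one, can be sketched as follows. The angle ranges and default values are illustrative assumptions, not the paper's calibration or its optimized mapping.

```python
def big_arm(real_reach_m, max_distance_m, real_elbow_deg,
            real_range=(30.0, 180.0), virtual_range=(60.0, 180.0)):
    """Sketch of a Big-Arm-style remapping.

    Returns (scale, virtual_elbow_deg):
      scale            -- factor applied to upper-arm and forearm lengths so
                          that full extension covers max_distance_m
      virtual_elbow_deg -- real elbow angle remapped linearly from
                          real_range onto virtual_range
    """
    scale = max_distance_m / real_reach_m          # prolong both arm segments
    r0, r1 = real_range
    v0, v1 = virtual_range
    t = (real_elbow_deg - r0) / (r1 - r0)          # normalize real elbow angle
    return scale, v0 + t * (v1 - v0)               # linear real-to-virtual map
```

Mapping a comfortable real range onto a wider (or offset) virtual range is what lets small, low-effort elbow movements sweep the whole enlarged reach, which is where the efficiency and ergonomic gains reported above would come from.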
Journal
International Journal of Human-Computer Studies