
International Journal of Human-Computer Studies: Latest Publications

Beyond content: Multimodal emotional responses predict online moral contagion across laboratory and real-world contexts
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-17 | DOI: 10.1016/j.ijhcs.2025.103689 | Vol. 208, Article 103689
Rui Li, Xu Liu, Yipeng Yu, Waxun Su, Yueqin Hu
Research on online moral contagion has highlighted the role of emotion in shaping the diffusion of moralized content. However, existing studies primarily focus on the emotional attributes of content rather than viewers’ actual emotional responses. How individuals’ actual reactions influence diffusion, especially whether positive or negative emotions facilitate spreading, remains unclear. This research employed multimodal measurements to capture college students’ subjective and physiological emotional experiences while viewing moralized short videos, along with their subsequent contagion behaviors. In Study 1, participants watched videos on a computer and reported their sharing intentions. Emotional responses were measured using EEG, ECG, eye tracking, and self-reports. In Study 2, participants viewed and shared videos on mobile devices to better reflect real-world contexts, with emotional responses captured via electrodermal activity, respiration, accelerometers, and gyroscopes. Machine learning models showed that actual emotional responses predicted sharing more accurately than content attributes (80% vs. 69% for intentions; 82% vs. 61% for behaviors). Short videos evoking high arousal or positive valence were more likely to be shared. Shapley value comparisons indicated that EEG, gyroscope, electrodermal activity, self-reports, and accelerometer contributed more to prediction accuracy than respiration, eye tracking, and ECG. EEG had the highest contribution, while the non-intrusive measures, gyroscope and accelerometer, may serve as substitutes for more intrusive methods. These findings extend the theoretical framework of online moral contagion by consistently demonstrating that high-arousal, positively valenced emotional responses facilitate diffusion, and also provide practical guidance for selecting effective modalities and implementing non-intrusive emotional assessments in future research and applications.
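As a rough illustration of the kind of pipeline this abstract describes (a classifier trained on multimodal emotional features, followed by Shapley-value attribution aggregated per modality), the Python sketch below runs on synthetic data. The feature names, modality groupings, and choice of gradient-boosted trees are assumptions made for demonstration, not the authors' implementation, and only a subset of the study's modalities is shown.

```python
# Illustrative sketch: predict sharing from multimodal emotional features and
# compare per-modality contributions with Shapley values.
# Feature names, groupings, and model choice are assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
import shap

rng = np.random.default_rng(0)

# Hypothetical feature groups per modality (stand-ins for the real extracted features).
modalities = {
    "eeg":           [f"eeg_{i}" for i in range(8)],
    "gyroscope":     [f"gyro_{i}" for i in range(3)],
    "eda":           ["eda_mean", "eda_peaks"],
    "self_report":   ["valence", "arousal"],
    "accelerometer": [f"acc_{i}" for i in range(3)],
}
feature_names = [f for group in modalities.values() for f in group]

# Synthetic data in place of the study's recordings.
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 10] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Shapley attribution, aggregated per modality group.
shap_values = shap.TreeExplainer(model).shap_values(X)
abs_shap = np.abs(shap_values).mean(axis=0)
for name, feats in modalities.items():
    idx = [feature_names.index(f) for f in feats]
    print(f"{name:>13}: {abs_shap[idx].sum():.3f}")
```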
Citations: 0
Movement instability and body image factors: Unobtrusive real-time detection of loss of control in ballet dancers
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-15 | DOI: 10.1016/j.ijhcs.2025.103686 | Vol. 208, Article 103686
Concepción Valdez, Ana Tajadura-Jimenez, Monica Tentori
Understanding the relationship between movement variability and body image is essential in disciplines where motor control and body perception are closely linked, such as dance. The present study with 24 ballet dancers introduces a computational approach for detecting moments of loss of control in elbow movement fluidity. This serves as an objective proxy for body image and body awareness, as subtle disruptions in movement, especially in joints that are not the primary focus of technical training, can reflect increased self-monitoring or discomfort, factors commonly associated with negative body image in dancers.
Through movement analysis during two common rehearsal contexts (i.e., marking choreography and full execution with music), we examined how movement variability differs based on task demands. Our findings show that loss-of-control moments are more frequent during marking, underscoring the influence of rehearsal context on both motor performance and body-related self-perception. Furthermore, we demonstrate that movement instability in the elbow joint can be associated with self-reported measures of body image and awareness, reinforcing the connection between motor variability and psychological well-being.
By identifying the elbow as a sensitive and reliable indicator of instability during ballet practice, our approach offers a lightweight alternative to full-body kinematic analysis, supporting practical applications beyond laboratory settings. From a Human-Computer Interaction (HCI) and Ubiquitous Computing (Ubicomp) perspective, this work contributes design insights for systems that integrate body image and movement behavior metrics. These findings open new possibilities for interactive technologies aimed at enhancing body image and improving movement precision in dancers and other movement practitioners.
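One plausible way to operationalize loss-of-control detection from an elbow-angle time series is to flag short windows whose jerk (the third derivative of joint angle) is unusually high relative to the trial's own baseline. The sketch below illustrates that idea on toy data; the jerk criterion, window length, and threshold are assumptions for illustration, not the authors' published method.

```python
# Sketch: flag "loss of control" moments as windows of abnormally high jerk.
# Window length, jerk criterion, and z-threshold are illustrative assumptions.
import numpy as np

def loss_of_control_moments(elbow_angle_deg, fs=100, window_s=0.5, z_thresh=2.5):
    """Return sample indices where movement smoothness breaks down."""
    dt = 1.0 / fs
    # Third derivative of angle (jerk) as a smoothness proxy.
    jerk = np.gradient(np.gradient(np.gradient(elbow_angle_deg, dt), dt), dt)
    win = int(window_s * fs)
    # Rolling RMS of jerk, then standardize against the whole trial.
    rms = np.sqrt(np.convolve(jerk**2, np.ones(win) / win, mode="same"))
    z = (rms - rms.mean()) / rms.std()
    return np.where(z > z_thresh)[0]

# Toy trial: smooth oscillation with a brief perturbation injected around t = 5 s.
fs = 100
t = np.arange(0, 10, 1 / fs)
angle = 30 * np.sin(2 * np.pi * 0.5 * t)
angle[500:520] += np.random.default_rng(1).normal(scale=3.0, size=20)
print(loss_of_control_moments(angle, fs)[:10])
```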
Citations: 0
Transforming data clutter into cultural experiences: an evaluation framework for accessibility in cultural big data platforms
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-15 | DOI: 10.1016/j.ijhcs.2025.103685 | Vol. 208, Article 103685
Qitong Zhou, Yinman Guo, Zixiang Fan, Tie Ji
The large-scale digitization of cultural heritage has enhanced public access to massive online resources; however, issues such as redundant data and inconsistent design standards across platforms often compromise user experience. This study investigates the accessibility factors of cultural big data platforms and aims to evaluate the digital user experience of platforms. Accessibility is explicitly defined here as the comprehensive ability of users to access, interpret, and utilize cultural data without barriers. A multi-stage mixed-method approach was adopted. First, a systematic literature review and bibliometric analysis were conducted to derive a preliminary indicator framework. Second, qualitative refinement of the framework was carried out using grounded theory based on interviews with three user groups (the public, researchers, and creative practitioners). Finally, the Delphi method and fuzzy AHP were employed for expert validation and weight calculation. Consequently, a weighted evaluation framework comprising four primary dimensions and twenty-two secondary indicators was established. The findings indicate that platform usability constitutes the foundation of accessibility, cultural data quality ensures reliability, cultural data presentation facilitates user engagement, and cultural data interaction deepens cultural understanding. Moreover, the study highlights the necessity of tailoring platforms to specific user needs and discusses strategies for enhancing the accessibility of digital cultural resources through interaction design. This research provides both theoretical insights and practical guidance for the design and optimization of cultural big data platforms.
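To make the weight-calculation step concrete, the sketch below derives priority weights for the four primary dimensions with a classical (crisp) AHP geometric-mean procedure and checks the consistency ratio. The pairwise comparison matrix is invented for illustration; the study itself aggregates expert judgements with a fuzzy extension of AHP.

```python
# Simplified (crisp) AHP illustration for the four primary dimensions.
# The pairwise comparison matrix below is hypothetical, not the study's data.
import numpy as np

dimensions = ["platform usability", "data quality", "data presentation", "data interaction"]

# Hypothetical pairwise comparisons (Saaty 1-9 scale); A[i, j] = importance of i over j.
A = np.array([
    [1.0, 2.0, 3.0, 3.0],
    [1/2, 1.0, 2.0, 2.0],
    [1/3, 1/2, 1.0, 1.0],
    [1/3, 1/2, 1.0, 1.0],
])

# Geometric-mean method for priority weights.
gm = A.prod(axis=1) ** (1 / A.shape[0])
weights = gm / gm.sum()

# Consistency ratio (random index RI = 0.90 for n = 4).
lam_max = (A @ weights / weights).mean()
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.90

for d, w in zip(dimensions, weights):
    print(f"{d:>20}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}  (acceptable if < 0.10)")
```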
Citations: 0
Evaluating the effect of co-speech gesture prediction on Human–Robot Interaction
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-13 | DOI: 10.1016/j.ijhcs.2025.103674 | Vol. 207, Article 103674
Enrique Fernández-Rodicio, Juan José Gamboa-Montero, Marcos Maroto-Gómez, Álvaro Castro-González, Miguel A. Salichs
Robots are starting to be used in tasks involving human–robot interactions. For them to be efficient in these tasks, they must be seen as suitable interaction partners. One method to achieve this is to enable them to use proper verbal and non-verbal communication. Selecting non-verbal behaviours that appropriately complement the robot’s verbal messages is complex and requires roboticists to understand how each dimension of communication, as well as their combination, affects how the user perceives the message. In this work, we evaluated the effect of selecting appropriate non-verbal expressions given the robot’s speech on how users perceive the robot and its expressiveness. To do this, we conducted a within-subjects experiment where participants played cards with two robots — one that used a co-speech gesture prediction module for selecting its non-verbal expressions and another that used random expressions. The results showed that using the gestures predicted by our system improves the experience of participants during interactions. Specifically, participants perceived the robot using the co-speech gesture prediction module as having a higher level of agency, and as having a more coherent expressiveness.
Citations: 0
Easy to handle: Exploring users’ interactions with an augmented reality human-machine interface for virtual stops in automated on-demand mobility scenarios
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-08 | DOI: 10.1016/j.ijhcs.2025.103671 | Vol. 207, Article 103671
Fabian Hub, Marc Wilbrink, Michael Oehl
For the success of future shared automated mobility on-demand (SAMOD), a high quality of service needs to be assured. One key challenge lies in the passenger journey, particularly in finding the designated virtual stop (vStop) for pick-up and identifying the approaching shuttle. Therefore, the new vStop human-machine interface (vStop HMI) concept supports passengers by means of augmented reality (AR). This explorative user study with young adults (N = 44) used a virtual reality (VR) setup to simulate the AR-based vStop HMI and answered two research questions across two consecutive scenarios: (1) When using an AR-based navigation aid like the vStop HMI, users tend to focus heavily on their device while walking, which may increase the risk of negligent roadside behavior. While restricting AR information access during walking might reduce this risk, how does it affect overall user experience? To examine this, we compared a vStop HMI with restricted AR information access against a baseline condition. Results showed positive effects on user experience and acceptance for both HMI versions, although workload was partly negatively affected. (2) When identifying the approaching shuttle, does using the vStop HMI effectively assist users in doing so and lead to a positive user experience? Usability testing indicated favorable ratings of workload, user experience, and acceptance during the shuttle identification task. Overall, the findings suggest that selectively restricting AR information during navigation tasks can help mitigate undesired user behavior in roadside environments while maintaining high levels of pragmatic quality and acceptance. Moreover, the vStop HMI enables seamless shuttle identification, even under complex conditions. By improving these critical stages of the passenger journey, the vStop HMI concept has the potential to enhance the overall quality of SAMOD services and increase public acceptance of future shared, automated, demand-responsive transportation systems.
Citations: 0
LLM-powered assistant with electrotactile feedback to assist blind and low vision people with maps and routes preview
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-08 | DOI: 10.1016/j.ijhcs.2025.103682 | Vol. 207, Article 103682
Chutian Jiang, Yinan Fan, Junan Xie, Emily Kuang, Kaihao Zhang, Mingming Fan
Previewing routes to unfamiliar destinations is a crucial task for many blind and low vision (BLV) individuals to ensure safety and confidence before their journey. While prior work has primarily supported navigation during travel, less research has focused on how best to assist BLV people in previewing routes on a map. We designed a novel electrotactile system around the fingertip and the Trip Preview Assistant (TPA) to convey map elements, route conditions, and trajectories. TPA harnesses large language models (LLMs) to dynamically control and personalize electrotactile feedback, enhancing the interpretability of complex spatial map data for BLV users. In a user study with twelve BLV participants, our system demonstrated improvements in efficiency and user experience for previewing maps and routes. This work contributes to advancing the accessibility of visual map information for BLV users when previewing trips.
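A highly simplified sketch of the general idea, letting a language model propose electrotactile parameters for a map element and clamping the output to a safe range before rendering, is shown below. Here `call_llm` is a placeholder for whatever model API the system uses, and the parameter names and ranges are assumptions rather than the authors' specification.

```python
# Sketch: ask an LLM for electrotactile parameters, then clamp to safe ranges.
# call_llm is a stand-in; parameter names/ranges are illustrative assumptions.
import json

SAFE_RANGES = {"frequency_hz": (20, 300), "intensity": (0.1, 1.0), "duration_ms": (50, 800)}

PROMPT_TEMPLATE = (
    "You control fingertip electrotactile feedback for a blind user previewing a map. "
    "For the element below, answer with JSON containing frequency_hz, intensity, duration_ms.\n"
    "Element: {element}"
)

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real LLM client here."""
    return '{"frequency_hz": 120, "intensity": 0.6, "duration_ms": 300}'

def electrotactile_params(element: str) -> dict:
    raw = json.loads(call_llm(PROMPT_TEMPLATE.format(element=element)))
    # Clamp every parameter into its safe range regardless of what the model returns.
    return {k: min(max(float(raw[k]), lo), hi) for k, (lo, hi) in SAFE_RANGES.items()}

print(electrotactile_params("pedestrian crossing with traffic light, 20 m ahead"))
```

Clamping model output before it reaches the stimulator is the key design point of the sketch: whatever personalization the model proposes, the rendered stimulus stays within fixed bounds.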
Citations: 0
MusicTraces: Combining music and painting to support adults with neurodevelopmental conditions and intellectual disabilities
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-07 | DOI: 10.1016/j.ijhcs.2025.103675 | Vol. 207, Article 103675
Valentin Bauer, Giacomo Caslini, Marco Mores, Mattia Gianotti, Franca Garzotto
People with NeuroDevelopmental Conditions (NDC) and associated Intellectual Disability (ID) often face social and emotional issues affecting their well-being. Although art therapies like music-making or painting can be beneficial, they often lack adaptability for those with moderate-to-severe ID or limited language abilities. Multisensory creative technologies combining art practices offer promising solutions but remain under-explored. This research investigates the impact of integrating music-making and painting within an interactive multisensory environment to support social communication, emotional regulation, and well-being in adults with moderate-to-severe NDCs and ID. The two-player music and painting activity called MusicTraces was first created through a co-design process involving caregivers and people with NDCs and ID. Two field studies were conducted. The first study compared MusicTraces with traditional group painting with music for ten adults, while the second examined its impact over two sessions on eight adults with more severe conditions. Both studies showed improved social communication, social regulation, and well-being. Insights are discussed to enhance collaboration between people with NDCs and ID through interactive creative technologies.
Citations: 0
Designing mediated social touch for mobile communication: From hand gestures to touch signals
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-07 | DOI: 10.1016/j.ijhcs.2025.103684 | Vol. 207, Article 103684
Qianhui Wei, Jun Hu, Min Li
The advanced haptic technology in smartphones makes mediated social touch (MST) possible and provides rich mobile communication between people. This paper presents a generation method for MST signals on smartphones. We translate MST gesture pressure to MST signal intensity, specifically by varying frequency through a function that maps pressure to frequency. We set the duration and provide different compound waveform compositions for MST signals. We conducted two user studies, each with 20 participants. The pilot study explored how likely the designed MST signals were to be understood as the intended MST gestures. We screened 23 MST signals that were suitable for the intended MST gestures. Then, we conducted a recognition task in the main study to explore to what extent the designed MST signals could be recognized as the intended MST gestures. Recall ranged from 13.3 % to 71.7 %, while precision ranged from 15.1 % to 55.1 %. These results can be referenced when designing MST signals. Our design implications include adjusting signal parameters to better match MST gestures and creating context-specific signals for different expressions. We suggest controlling the number of signals, using varied compound waveform composition forms, and adding visual stickers with vibrotactile stimuli. MST signals should also be evaluated in specific contexts, especially for mobile communication.
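As a minimal illustration of a pressure-to-frequency mapping of the kind described, the sketch below linearly interpolates normalized pressure across a vibration band. The band limits and the linear form are assumptions made for illustration, since the paper defines its own mapping function and compound waveform compositions.

```python
# Minimal sketch of mapping normalized touch pressure to a vibration frequency.
# The frequency band (80-230 Hz) and the linear mapping are illustrative assumptions.
def pressure_to_frequency(pressure: float, f_min: float = 80.0, f_max: float = 230.0) -> float:
    """Map normalized touch pressure (0-1) to a vibration frequency in Hz."""
    p = min(max(pressure, 0.0), 1.0)    # clamp out-of-range sensor values
    return f_min + p * (f_max - f_min)  # linear interpolation across the band

for p in (0.0, 0.25, 0.5, 1.0):
    print(f"pressure={p:.2f} -> {pressure_to_frequency(p):.0f} Hz")
```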
Citations: 0
Do voice agents affect people’s gender stereotypes? Quantitative investigation of stereotype spillover effects from interacting with gendered voice agents
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-07 | DOI: 10.1016/j.ijhcs.2025.103683 | Vol. 207, Article 103683
Madeleine Steeds, Marius Claudy, Benjamin R. Cowan, Anshu Suri
A 2019 UNESCO report raised concerns that female-gendered voice assistants (VAs) may be perpetuating gender-stereotypical views. Subsequent research has investigated remedies for this harm, but there is little research into the potential spillover effect of gendered VA use on stereotypical views of gender. We take a quantitative approach to address this gap. Over two studies (n = 235, 351), we find little predictive or causal evidence that gendered VAs relate to gender-stereotypical views, despite stereotypes being ascribed to VAs. This implies that, while we should continue to strive for equality and equity in technological design, everyday VA use may not be perpetuating gender stereotypes to the extent expected. We highlight the need for longitudinal research on gendered technology use and on problematic use cases such as using technology to simulate harassment. Further, we suggest the need for work on understanding common stereotypes held about diverse gender identities.
Citations: 0
To ‘errr’ is robot: How humans interpret hesitations in the speech of a humanoid robot
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, CYBERNETICS | Pub Date: 2025-11-07 | DOI: 10.1016/j.ijhcs.2025.103681 | Vol. 208, Article 103681
Xinyi Chen, Yao Yao
In human-to-human conversations, people sometimes interpret hesitations from their conversational partners as a clue for rejection (e.g., “um I’ll tell you later if I could come to the party” may be interpreted as “I won’t come to the party”). This type of interpretation is deeply embedded in human talkers’ understanding of social etiquette and modeling of the state of mind of the interlocutors. In this study, we examine how human listeners interpret hesitations in robot speech in a human-robot interactive context, as compared to how they interpret human-produced hesitations. In Experiment 1, participants (N = 63) watched videos of conversations between a humanoid robot talker and a human talker, where the robot talker would give responses, with or without hesitations, to the human talker’s requests or inquiries. The participants then completed a memory test of what they remembered from the conversations. The memory test results showed that participants were significantly more likely to interpret hesitant responses from the robot as rejections compared to completely fluent robot responses. The hesitation-triggered bias toward negative interpretations was replicated in Experiment 2 with a separate group of participants (N = 59), who listened to the same conversations but as human-to-human interactions. Combined analysis found no difference in the magnitude of the hesitation bias between the two conditions. These results provide evidence that human listeners draw similar inferences from hesitant speech produced by robots and those by human talkers. This study offers valuable insights for the future design of conversational AI agents, highlighting the importance of subtle speech cues in human-machine interaction.
Citations: 0