
Latest publications in International Journal of Human-Computer Studies

Evaluating empathic responses to bimodal realism in emotionally expressive virtual humans: An eye-tracking and facial electromyography study
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2025-09-12 DOI: 10.1016/j.ijhcs.2025.103630
Darragh Higgins , Benjamin R. Cowan , Rachel McDonnell
As advancements in animation and voice synthesis find an expanding range of uses, more opportunities arise for interactions with animated virtual humans. Such interactions may be influenced by improved portrayals of character features such as emotion and realism. The present study aimed to examine how variations in animated facial detail and vocal prosody shape user perception of emotion in virtual characters. This impact was assessed via facial electromyography and eye-tracking measures, as well as self-reports of state empathy and character appeal. Results indicate that participants were influenced by emotional valence in terms of zygomaticus major and corrugator supercilii muscle activation. Survey data appear to show greater empathy for conditions of increased facial detail and more human-like vocal prosody. Moreover, eye-tracking results suggest a preference for eye contact regardless of detail or prosody, with participants fixating more on facial areas of interest overall for the positively valenced conditions. Finally, there is evidence that trait empathy and mismatches between higher facial detail and lower vocal human-likeness may influence zygomaticus major activity in response to positively valenced stimuli. These results are discussed in the context of virtual character design, contemporary understandings of empathy, and the phenomenon of the Uncanny Valley.
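The facial-EMG measure described above can be illustrated with a toy computation. This is a hypothetical sketch, not the authors' pipeline: it assumes rectified EMG amplitude, a baseline-versus-stimulus percent-change measure, and treats zygomaticus major (smiling) minus corrugator supercilii (frowning) reactivity as a crude valence index.

```python
from statistics import mean

def emg_reactivity(baseline: list[float], stimulus: list[float]) -> float:
    """Percent change in mean rectified EMG amplitude from baseline to stimulus window."""
    b = mean(abs(x) for x in baseline)
    s = mean(abs(x) for x in stimulus)
    return (s - b) / b * 100.0

def valence_index(zyg_base: list[float], zyg_stim: list[float],
                  corr_base: list[float], corr_stim: list[float]) -> float:
    """Positive when smiling-muscle reactivity exceeds frowning-muscle reactivity."""
    return emg_reactivity(zyg_base, zyg_stim) - emg_reactivity(corr_base, corr_stim)
```

A doubling of zygomaticus amplitude with flat corrugator activity yields a positive index, consistent with a positively valenced response.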
Citations: 0
Do men and women navigate differently in virtual environments? A comparative study
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2025-09-11 DOI: 10.1016/j.ijhcs.2025.103621
Pinyan Tang, Yuye Liao, Kun Zheng, Yifeng Sheng, Wenjie Ren, Chuan Liu, Yuqi Li
This study investigates the effects of gender and training interventions on spatial navigation in VR. Thirty-eight participants, divided into male and female intervention and control groups, performed a dual task involving coin collection and destination location in a large-scale urban VR environment. Performance metrics included the number of coins collected, the time taken to reach the destination, and eye-tracking data, normalised for task difficulty. While pre-test performance revealed no significant gender differences, eye movement data highlighted baseline gender differences in gaze patterns, with females exhibiting more exploratory behaviour. Training interventions led to performance improvements, particularly for females, whose gains remained statistically significant after Bonferroni correction. These improvements were accompanied by successful transitions between egocentric and allocentric strategies, as evidenced by gaze data and post-hoc interviews. For males, the intervention led to mixed results, with improvements in performance but a trade-off in efficiency. These findings deepen our understanding of how gender and training influence navigation strategies in VR and inform the design of future VR training systems, emphasising the importance of balancing cognitive load and strategy selection.
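The Bonferroni correction mentioned above has a one-line form: each p-value is multiplied by the number of comparisons and capped at 1. A minimal sketch (the example p-values are hypothetical, not taken from the study):

```python
def bonferroni(p_values: list[float]) -> list[float]:
    """Bonferroni correction: scale each p-value by the number of tests, capped at 1."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]

# Three pairwise comparisons: only the first survives correction at alpha = 0.05.
adjusted = bonferroni([0.01, 0.04, 0.20])  # approx. [0.03, 0.12, 0.60]
```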
Citations: 0
Mental health management as a social endeavour: Challenges and opportunities for conversational agent design
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2025-09-10 DOI: 10.1016/j.ijhcs.2025.103618
Robert Bowman , Anja Thieme , Benjamin Cowan , Gavin Doherty
Conversational agents (CAs) are a tempting type of computer interface for assisting people’s mental health due to their ability to simulate human-like interactions; however, their integration within the broader social context of mental health management remains largely under-explored. Recognising that managing one’s mental health is often a social rather than individual activity involving close persons such as partners, family, and friends, our research takes a social orientation to mental health management. Utilising design cards that depict fictional yet plausible CA concepts, we present the analysis of an interview study with 24 young adults to understand their views on CAs for both their own use and for a close person. Participants viewed CAs as potentially valuable complements to human support, but expressed concerns about over-reliance and replacement. Our analysis reveals key tensions, design considerations, and opportunities for integrating CAs into mental health ecosystems in ways that respect and enhance existing social support structures.
Citations: 0
Detection of cognitive and attention dimensions in block programming interface for learning sensor data analytics in construction education
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2025-09-10 DOI: 10.1016/j.ijhcs.2025.103626
Mohammad Khalid , Abiola Akanmu , Ibukun Awolusi , Homero Murzi
The increasing adoption of sensing technologies in the construction industry generates vast amounts of raw data, requiring analytics skills for effective extraction, analysis, and communication of actionable insights. To address this, ActionSens, a block-based programming interface, was developed to equip undergraduate construction engineering students with domain-specific sensor data analytics skills. However, efficient user interaction with such tools requires integrating intelligent systems capable of detecting users’ attention and cognitive states to provide context-specific and tailored support. This study leveraged eye-tracking data from construction students during the usability evaluation of ActionSens to explore machine learning models for classifying areas of interest and interaction difficulties. For visual detection, key interface elements were defined as areas of interest, serving as ground truth, while interaction difficulty was labeled based on participant feedback for reported challenges. The Ensemble model demonstrated the highest performance, achieving 88.3% accuracy in classifying areas of interest with raw data, and 82.9% for classifying interaction difficulties using oversampling techniques. Results show that gaze position and pupil diameter were the most reliable predictors for classifying areas of interest and detecting interaction difficulties. This study pioneers the integration of machine learning and eye-tracking with block-based programming interfaces in construction education. It also reinforces the Aptitude-Treatment Interaction theory by demonstrating how personalized support can be adapted based on individual cognitive aptitudes to enhance learning outcomes. These findings further contribute to the development of adaptive learning environments that can detect specific user aptitudes and provide context-specific guidance, enabling students to acquire technical skills more effectively.
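The abstract names two standard techniques: oversampling a minority class and ensemble classification. The sketch below illustrates both in plain Python; it is not the authors' Ensemble model (which would typically be built with a machine-learning library), and the data shapes are assumptions.

```python
import random
from collections import Counter

def random_oversample(X: list, y: list, seed: int = 0) -> tuple[list, list]:
    """Duplicate minority-class samples at random until all classes are balanced."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        pool = [x for x, lab in zip(X, y) if lab == label]
        for _ in range(target - n):
            X_out.append(rng.choice(pool))
            y_out.append(label)
    return X_out, y_out

def majority_vote(predictions: list[list]) -> list:
    """Ensemble by voting: most common label per sample across base models."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]
```

Balancing the training labels before fitting, then voting across base models, mirrors the oversampling-plus-ensemble setup the abstract reports for classifying interaction difficulties.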
Citations: 0
Exploring the Impact of Modality and Speech Rate Manipulation in Voice Permission Requests—Limits of Applicability and Potential for Influencing Decision-Making
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2025-09-10 DOI: 10.1016/j.ijhcs.2025.103590
Anna Leschanowsky , Anastasia Sergeeva , Judith Bauer , Sheetal Vijapurapu , Mateusz Dubiel
As voice-enabled technologies become increasingly prevalent, voice-enabled permission requests become a crucial topic of investigation. It is not yet clear how to appropriately inform users in voice user interfaces (VUIs) about data processing practices. To understand how modality (text vs. voice) and the speech rate of the voice can influence users’ perceptions and decisions to grant permission, we conducted two preregistered studies (N = 343 and N = 594) and one pre-study, including two listening tasks to design potentially deceptive voice patterns. We found that users can distinguish between different levels of intrusiveness in the voice modality. However, they are less likely to accept voice-based permissions, pointing to cognitive problems associated with them. Moreover, we found that speech rate manipulations of the action verbs “Accept” and “Decline” shifted users’ decisions towards acceptance, making the effect less controllable than predicted. This work highlights implications and design considerations for future voice-enabled permission requests.
Citations: 0
Do you need help? Identifying and responding to pilots’ troubleshooting through eye-tracking and Large Language Model
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2025-09-09 DOI: 10.1016/j.ijhcs.2025.103617
Mengtao Lyu, Fan Li
In-time automation support is crucial for enhancing pilots’ performance and flight safety. While extensive research has been conducted on providing automation support to mitigate risks associated with the Out-of-the-Loop (OOTL) phenomenon, limited attention has been given to supporting pilots who are actively engaged, known as In-the-Loop (ITL) status. Despite their active engagement, ITL pilots face challenges in managing multiple tasks simultaneously without additional support. For instance, providing critical information through in-time automation support can significantly improve efficiency and flight safety when pilots need to visually troubleshoot unexpected incidents while monitoring the aircraft’s flying status. This study addresses the gap in ITL support by introducing a method that utilizes eye-tracking data tokenized into Visual Attention Matrices (VAMs), integrated with a Large Language Model (LLM) to identify and respond to troubleshooting activities of ITL pilots. We address two primary challenges: capturing the complex troubleshooting status of pilots, which blends with normal monitoring behaviors, and effectively processing non-semantic eye-tracking data using LLM. The proposed VAM approach provides a structured representation of visual attention that supports LLM reasoning, while empirical VAMs enhance the model’s ability to efficiently identify critical features. A case study involving 19 licensed pilots validates the efficacy of the proposed approach in identifying and responding to pilots’ troubleshooting activities. This research contributes significantly to adaptive Human–Computer Interaction (HCI) in aviation by improving support for ITL pilots, thereby laying a foundation for future advancements in human–AI collaboration within automated aviation systems.
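The Visual Attention Matrix idea can be sketched as binning gaze fixations into a fixed grid and serializing the counts as text for an LLM prompt. Grid size, coordinate normalization, and the prompt format below are assumptions; the paper's exact tokenization is not described in this listing.

```python
def visual_attention_matrix(fixations: list[tuple[float, float]],
                            rows: int = 3, cols: int = 3) -> list[list[int]]:
    """Bin normalized gaze fixations (x, y in [0, 1]) into a rows x cols count matrix."""
    vam = [[0] * cols for _ in range(rows)]
    for x, y in fixations:
        r = min(int(y * rows), rows - 1)  # clamp so y == 1.0 stays in the last row
        c = min(int(x * cols), cols - 1)
        vam[r][c] += 1
    return vam

def vam_to_prompt(vam: list[list[int]]) -> str:
    """Serialize the matrix as whitespace-separated rows for a text-only LLM."""
    return "\n".join(" ".join(str(n) for n in row) for row in vam)
```

A cluster of fixations in one grid cell then shows up as a large count in the serialized matrix, which a language model can reason over without access to raw gaze coordinates.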
Citations: 0
Shaping the fairness journey: The roles of AI literacy, explanation, and interpersonal interaction in AI interviews
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2025-09-06 DOI: 10.1016/j.ijhcs.2025.103629
Yi Xu, Zhiyun Chen, Mengyuan Dong
Grounded in organizational justice theory, this two-study investigation provides a comprehensive examination of fairness perceptions across the entire AI interviews process. Through a three-stage experimental design (Study 1, N = 113; Study 2, N = 206), we explored how design and outcome factors influence procedural and distributive justice. We manipulated the AI’s explanation (With vs. Without) and level of interpersonal interaction (High vs. low) during the interview process, and the interview result (Pass vs. Fail) and decision agent (100% AI vs. 50% AI + 50% Human) in the post-decision stage. Results indicate that while candidate AI literacy, human-in-the-loop decision-making, and positive outcomes consistently improved fairness perceptions, the effects of system design were more complex. Design features intended to enhance the user experience, such as high AI interactivity and detailed explanations, improved aspects of procedural justice during the interview. Yet, these sometimes backfired by diminishing the perceived distributive justice of the final decision. This reveals a critical tension between a positive process experience and a fair outcome evaluation. These complex effects underscore the practical need for a holistic design approach that manages the entire candidate journey. Fair AI systems require not only improving candidate literacy but also carefully designing system explanations to manage applicant expectations effectively.
Citations: 0
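The crossed manipulations this study describes (explanation × interactivity during the interview, result × decision agent afterwards) amount to a full factorial condition grid. The sketch below reconstructs that grid; the factor and level names are paraphrased from the abstract, not taken from the authors' materials:

```python
# Illustrative 2 x 2 (interview stage) x 2 x 2 (post-decision stage) design.
# Factor/level names are paraphrased from the abstract, not the study's code.
from itertools import product

interview_factors = {
    "explanation": ["with", "without"],
    "interactivity": ["high", "low"],
}
post_decision_factors = {
    "result": ["pass", "fail"],
    "decision_agent": ["100% AI", "50% AI + 50% human"],
}

# Fully cross all four factors into one condition dict per cell.
keys = list(interview_factors) + list(post_decision_factors)
conditions = [
    dict(zip(keys, combo))
    for combo in product(*interview_factors.values(), *post_decision_factors.values())
]
print(len(conditions))  # 2 * 2 * 2 * 2 = 16 cells
```

In practice participants would be assigned to a subset of these cells per study stage rather than all sixteen at once; the grid just makes the crossing explicit.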
Uncovering the dynamics of human-AI hybrid performance: A qualitative meta-analysis of empirical studies
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date : 2025-09-06 DOI: 10.1016/j.ijhcs.2025.103622
Dóra Göndöcs, Szabolcs Horváth, Viktor Dörfler
Human-AI collaboration is an increasingly important area of research as AI systems are integrated into everyday workflows and move beyond mere automation and augmentation to more collaborative roles. However, existing research often overlooks the dynamics and performance aspects of this interaction. Our study addresses this gap through a review of empirical AI studies from 2018 to 2024, focusing on the key factors influencing human-AI collaboration outcomes within the spectrum of Human-Centered Artificial Intelligence (HCAI).
We identify 24 critical performance factors that influence hybrid performance, grouped into four categories using thematic analysis. Then, we uncover and analyze the complex, non-linear interdependencies between these factors. We present these relationships in a factor dependency graph, highlighting the most influential nodes.
The graph and the specific factor interactions supported by the papers reveal a complex, tightly interconnected web of factors. Rather than an easy-to-predict combination of inputs, human-AI collaboration in a given context likely produces a dynamic, evolving system with often non-linear effects on its hybrid performance. Our findings, together with previous research on automation technologies, suggest that the application of AI tools in collaborative scenarios would benefit from a comprehensive performance framework. Our study intends to contribute to this future line of research with this initial framework.
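A factor dependency graph of the kind this abstract describes can be represented minimally as a directed edge list, with node degree serving as a crude proxy for "most influential node." The factor names below are invented placeholders, not the paper's actual 24 factors:

```python
from collections import defaultdict

# Directed "influences" edges between performance factors.
# These names are illustrative placeholders, not the paper's factors.
edges = [
    ("explanation_quality", "trust_in_ai"),
    ("task_complexity", "reliance"),
    ("trust_in_ai", "reliance"),
    ("user_expertise", "trust_in_ai"),
    ("user_expertise", "hybrid_performance"),
    ("reliance", "hybrid_performance"),
]

out_degree = defaultdict(int)
in_degree = defaultdict(int)
for src, dst in edges:
    out_degree[src] += 1
    in_degree[dst] += 1

# Rank factors by total degree; ties break alphabetically so the
# ordering is deterministic.
nodes = set(out_degree) | set(in_degree)
ranked = sorted(nodes, key=lambda n: (-(out_degree[n] + in_degree[n]), n))
print(ranked)  # best-connected factors first
```

Richer centrality measures (betweenness, PageRank) would be the natural next step for a real dependency graph; total degree is just the simplest way to surface well-connected nodes.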
{"title":"Uncovering the dynamics of human-AI hybrid performance: A qualitative meta-analysis of empirical studies","authors":"Dóra Göndöcs ,&nbsp;Szabolcs Horváth ,&nbsp;Viktor Dörfler","doi":"10.1016/j.ijhcs.2025.103622","DOIUrl":"10.1016/j.ijhcs.2025.103622","url":null,"abstract":"<div><div>Human-AI collaboration is an increasingly important area of research as AI systems are integrated into everyday workflows and moving beyond mere automation and augmentation to more collaborative roles. However, existing research often overlooks the dynamics and performance aspects of this interaction. Our study addresses this gap through a review of empirical AI studies from 2018–2024, focusing on the key factors influencing human-AI collaboration outcomes within the spectrum of Human-Centered Artificial Intelligence (HCAI).</div><div>We identify 24 critical performance factors that influence hybrid performance, grouped into four categories using thematic analysis. Then, we uncover and analyze the complex, non-linear interdependencies between these factors. We present these relationships in a factor dependency graph, highlighting the most influential nodes.</div><div>The graph and specific factor interactions supported by the papers reveal a quite complex web, an interconnectedness of factors. As opposed to being an easy-to-predict combination of inputs, human-AI collaboration in a given context likely leads to a dynamic, evolving system with often non-linear effects on its hybrid performance. Our findings and the previous research on automation technologies suggest that the application of AI tools in collaborative scenarios would benefit from a comprehensive performance framework. 
Our study intends to contribute to this future line of research with this initial framework.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"205 ","pages":"Article 103622"},"PeriodicalIF":5.1,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145222122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Examining dual-task interference effects of visual and auditory perceptual load in virtual reality
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date : 2025-09-04 DOI: 10.1016/j.ijhcs.2025.103619
Mohamad El Iskandarani, Matthew Bolton, Sara Lu Riggs
Immersive environments often require users to perform tasks that vary in sensory modality and processing demands. Dual-task interference arises when such tasks are performed concurrently, often leading to performance declines and safety risks in applied settings. Yet, it remains unclear how perceptual load and task type jointly shape such interference in virtual reality (VR). To address this gap, we examined intramodal and crossmodal effects of visual and auditory load on dual-task interference in a VR dual-task paradigm. Participants performed a continuous visual tracking task while concurrently completing auditory detection tasks of two types: spatial and object-based. Results showed that perceptual load and task type differentially influenced intramodal interference, with stronger effects in the auditory detection task. Contrary to predictions, no crossmodal interference was observed, suggesting a degree of independence between vision and audition. These findings provide valuable insights for VR interface designers on how to present multimodal content in VR environments, with implications for user safety and efficacy.
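Detection tasks like the auditory one described here are commonly summarized with signal-detection measures such as d′, separating sensitivity from response bias. The sketch below computes d′ from hypothetical hit and false-alarm counts; the counts and the log-linear correction are illustrative assumptions, not values or methods from this study:

```python
# Signal-detection sketch for a detection task: d' from hits vs. false alarms.
# Trial counts are invented for illustration.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d', with a log-linear correction so that
    rates of exactly 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical participant: 45/50 targets detected, 8/50 false alarms.
score = d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42)
print(round(score, 2))
```

A d′ near 0 would indicate chance-level detection; higher values indicate better discrimination of targets from noise, independent of how liberally the participant responds.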
{"title":"Examining dual-task interference effects of visual and auditory perceptual load in virtual reality","authors":"Mohamad El Iskandarani,&nbsp;Matthew Bolton,&nbsp;Sara Lu Riggs","doi":"10.1016/j.ijhcs.2025.103619","DOIUrl":"10.1016/j.ijhcs.2025.103619","url":null,"abstract":"<div><div>Immersive environments often require users to perform tasks that vary in sensory modality and processing demands. Dual-task interference arises when such tasks are performed concurrently, often leading to performance declines and safety risks in applied settings. Yet, it remains unclear how perceptual load and task type jointly shape such interference in virtual reality (VR). To address this gap, we examined intramodal and crossmodal effects of visual and auditory load on dual-task interference in a VR dual-task paradigm. Participants performed a continuous visual tracking task while concurrently completing auditory detection tasks of two types: spatial and object-based. Results showed that perceptual load and task type differentially influenced intramodal interference, with stronger effects in the auditory detection task. Contrary to predictions, no crossmodal interference was observed, suggesting a degree of independence between vision and audition. 
These findings provide valuable insights for VR interface designers on how to present multimodal content in VR environments, which has implications on user safety and efficacy.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"205 ","pages":"Article 103619"},"PeriodicalIF":5.1,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Conversational agents and charitable behavioral intentions: The roles of modality, communication style, and perceived anthropomorphism
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date : 2025-09-03 DOI: 10.1016/j.ijhcs.2025.103616
Junqi Shao, Leona Yi-Fan Su, Ziyang Gong, Minrui Chen
Conversational agents (CAs) are increasingly utilized by organizations for fundraising and volunteer recruitment. Yet, little is understood about how voice-based CAs could serve these purposes optimally. This experimental study therefore compares voice-based CAs against text-based ones in terms of their ability to foster users’ intentions to make charitable contributions, and investigates the potential mediation of such effects by two dimensions of user-perceived anthropomorphism. Additionally, it examines how a CA’s communication style moderates these effects. It found that, when a voice-based CA employed a formal communication style, mindless anthropomorphism was a significant mediator of its positive association with charitable behavioral intentions. Conversely, when employing an informal communication style, a text-based CA elicited significantly higher levels of mindful anthropomorphism, and also was positively linked to charitable behavioral intentions. These findings expand our theoretical understanding of how CA modalities influence people’s moral responses toward computers; how this effect could be impaired, or strengthened, by different communication styles; and the underlying mechanisms of two dimensions of anthropomorphism. Practical implications are also discussed.
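The mediation this study reports (modality → perceived anthropomorphism → behavioral intention) follows standard product-of-coefficients logic: the indirect effect is the path from predictor to mediator multiplied by the path from mediator to outcome. The sketch below illustrates that logic on simulated data; the coefficients are invented and do not come from the paper:

```python
# Product-of-coefficients mediation sketch on simulated data.
# True paths here are fabricated: a = 0.8 (modality -> anthropomorphism),
# b = 0.5 (anthropomorphism -> intention).
import random

random.seed(0)
n = 500
modality = [random.choice([0, 1]) for _ in range(n)]       # 0 = text, 1 = voice
anthro = [0.8 * m + random.gauss(0, 1) for m in modality]  # a path + noise
intent = [0.5 * x + random.gauss(0, 1) for x in anthro]    # b path + noise

def ols_slope(x, y):
    """Simple-regression slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

a = ols_slope(modality, anthro)
b = ols_slope(anthro, intent)
indirect = a * b  # product-of-coefficients estimate of the indirect effect
print(round(indirect, 2))
```

Published mediation analyses would typically bootstrap a confidence interval around this product and control for the direct path; the sketch only shows where the indirect-effect estimate comes from.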
{"title":"Conversational agents and charitable behavioral intentions: The roles of modality, communication style, and perceived anthropomorphism","authors":"Junqi Shao ,&nbsp;Leona Yi-Fan Su ,&nbsp;Ziyang Gong ,&nbsp;Minrui Chen","doi":"10.1016/j.ijhcs.2025.103616","DOIUrl":"10.1016/j.ijhcs.2025.103616","url":null,"abstract":"<div><div>Conversational agents (CAs) are increasingly utilized by organizations for fundraising and volunteer recruitment. Yet, little is understood about how voice-based CAs could serve these purposes optimally. This experimental study therefore compares voice-based CAs against text-based ones in terms of their ability to foster users’ intentions to make charitable contributions, and investigates the potential mediation of such effects by two dimensions of user-perceived anthropomorphism. Additionally, it examines how a CA’s communication style moderates these effects. It found that, when a voice-based CA employed a formal communication style, mindless anthropomorphism was a significant mediator of its positive association with charitable behavioral intentions. Conversely, when employing an informal communication style, a text-based CA elicited significantly higher levels of mindful anthropomorphism, and also was positively linked to charitable behavioral intentions. These findings expand our theoretical understanding of how CA modalities influence people’s moral responses toward computers; how this effect could be impaired, or strengthened, by different communication styles; and the underlying mechanisms of two dimensions of anthropomorphism. 
Practical implications are also discussed.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"205 ","pages":"Article 103616"},"PeriodicalIF":5.1,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0