
International Journal of Human-Computer Studies: Latest Articles

Breaking down barriers: A new approach to virtual museum navigation for people with visual impairments through voice assistants
IF 5.3 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2024-11-17 · DOI: 10.1016/j.ijhcs.2024.103403
Yeliz Yücel, Kerem Rızvanoğlu
People with visual impairments (PWVI) encounter challenges in accessing cultural, historical, and practical information in a predominantly visual world, limiting their participation in various activities, including visits to museums. Museums, as important centers for exploration and learning, often overlook these accessibility issues. This abstract presents the iMuse Model, an innovative approach to creating accessible and inclusive museum environments. The iMuse Model centers on the co-design of a prototype voice assistant integrated into Google Home, aimed at enabling remote navigation for PWVI within the Basilica Cistern museum in Turkey. The model comprises a two-layer study. The first layer involves collaboration with PWVI and their sight-loss instructors to develop a five-level framework tailored to their unique needs and challenges. The second layer focuses on testing this design with 30 people with visual impairments, employing various methodologies, including the Wizard of Oz technique. Our prototype provides inclusive audio descriptions that encompass sensory, emotional, historical, and structural elements, along with spatialized sounds from the museum environment, improving spatial understanding and cognitive map development. Notably, we developed two versions of the voice assistant: one with a humorous interaction style and one with a non-humorous approach. Users preferred the humorous version, which led to increased interaction, enjoyment, and social learning, as supported by both qualitative and quantitative results. In conclusion, the iMuse Model highlights the potential of co-designed, humor-infused, and culturally sensitive voice assistants. Our model not only aids PWVI in navigating unfamiliar spaces but also enhances their social learning, engagement, and appreciation of cultural heritage within museum environments.
Citations: 0
Integrating augmented reality and LLM for enhanced cognitive support in critical audio communications
IF 5.3 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2024-11-06 · DOI: 10.1016/j.ijhcs.2024.103402
Fang Xu , Tianyu Zhou , Tri Nguyen , Haohui Bao , Christine Lin , Jing Du
Operation and Maintenance (O&M) missions are often time-sensitive and accuracy-dependent, requiring rapid and precise information processing in noisy, chaotic environments where oral communication can lead to cognitive overload and impaired decision-making. Augmented Reality (AR) and Large Language Models (LLMs) offer potential for enhancing situational awareness and lowering cognitive load by integrating digital visualizations with the physical world and improving dialogue management. However, synthesizing these technologies into a real-time system that effectively aids operators remains a challenge. This study explores the integration of AR and GPT-4, an advanced LLM, in time-sensitive O&M tasks, aiming to enhance situational awareness and manage cognitive load during oral communications. A customized AR system, incorporating the Microsoft HoloLens2 for cognitive monitoring and GPT-4 for decision-making assistance, was tested in a human subject experiment with 30 participants. The 2×2 factorial experiment evaluated the effects of AR and LLM assistance on task performance and cognitive load. Results demonstrated significant improvements in task accuracy and reductions in cognitive load, highlighting the effectiveness of AR and LLM integration in supporting O&M missions. These findings emphasize the need for further research to optimize operational strategies in mission critical environments.
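As an aside, the effects examined in a 2×2 factorial design like the one described can be summarized from cell and marginal means. A minimal sketch, using hypothetical accuracy scores (not the study's data) and unweighted marginal means:

```python
from statistics import mean

# Hypothetical task-accuracy scores (%) for a 2x2 factorial design:
# factor 1 = AR (on/off), factor 2 = LLM (on/off), three participants per cell.
scores = {
    ("AR", "LLM"): [92, 95, 90],
    ("AR", "no-LLM"): [85, 88, 86],
    ("no-AR", "LLM"): [84, 82, 87],
    ("no-AR", "no-LLM"): [75, 78, 74],
}

# Cell means: average score within each condition combination.
cell_means = {cond: mean(vals) for cond, vals in scores.items()}

def marginal(factor_idx, level):
    """Marginal mean: pool all scores at one level of one factor."""
    pooled = [v for cond, vals in scores.items()
              if cond[factor_idx] == level for v in vals]
    return mean(pooled)

# A factor's main effect is the difference between its marginal means.
ar_effect = marginal(0, "AR") - marginal(0, "no-AR")
llm_effect = marginal(1, "LLM") - marginal(1, "no-LLM")
```

In a full analysis these descriptive effects would be tested with a two-way ANOVA; the sketch only shows how the design's four cells decompose into two main effects.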
Citations: 0
ChatGPT and me: First-time and experienced users’ perceptions of ChatGPT’s communicative ability as a dialogue partner
IF 5.3 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2024-11-04 · DOI: 10.1016/j.ijhcs.2024.103400
Iona Gessinger , Katie Seaborn , Madeleine Steeds , Benjamin R. Cowan
Chatbots like ChatGPT have the potential to produce more natural conversational user interface interactions. Yet, we currently know little about perceptions of ChatGPT as a dialogue partner, and whether interaction changes these perceptions. Through an online, two-stage, mixed-methods study conducted in July 2023, in which first-time and experienced users living in the UK or Ireland engaged in tasks with ChatGPT, we show that interaction improves attitudes towards the system for first-time users, while these attitudes are already positive and stable in experienced users. We further show that first-time users’ perceptions of ChatGPT’s communicative ability (competence, human-likeness, and flexibility) are more dynamic than those of experienced users, although the experienced users’ perceptions also peak post-interaction. When reflecting on their interaction experience with ChatGPT, both groups were positive, with little mention of limitations. We discuss the implications of these findings for user perceptions of ChatGPT as a dialogue partner, and highlight the potential risks of uncritical adoption of such technology.
Citations: 0
Traceable teleportation: Improving spatial learning in virtual locomotion
IF 5.3 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2024-11-02 · DOI: 10.1016/j.ijhcs.2024.103399
Ye Jia , Zackary P.T. Sin , Chen Li , Peter H.F. Ng , Xiao Huang , George Baciu , Jiannong Cao , Qing Li
In virtual reality, point-and-teleport (P&T) is a locomotion technique popular for its user-friendliness, low workload, and mitigation of cybersickness. However, most P&T schemes use instantaneous transitions, which are known to hinder spatial learning. While replacing instantaneous transitions with animated interpolations can address this issue, such animations may inadvertently induce cybersickness. To counter these deficiencies, we propose Traceable Teleportation (TTP), an enhanced locomotion technique grounded in a theoretical framework designed to improve spatial learning. TTP incorporates two novel features: an Undo-Redo mechanism that facilitates rapid back-and-forth movements, and a Visualized Path that offers additional visual cues. We conducted a user study via a set of spatial learning tests within a virtual labyrinth to assess the effect of these enhancements on the P&T technique. Our findings indicate that the TTP Undo-Redo design generally facilitates the learning of orientational spatial knowledge without incurring additional cybersickness or diminishing the sense of presence.
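An undo-redo mechanism over teleport destinations is, at its core, the classic two-stack pattern. A minimal illustrative sketch of that pattern (the paper's actual TTP implementation is not shown in the abstract; names and behavior here are assumptions):

```python
class TeleportHistory:
    """Two-stack undo/redo over teleport destinations (illustrative sketch)."""

    def __init__(self, start):
        self.current = start
        self._undo = []  # positions we can jump back to
        self._redo = []  # positions we can jump forward to again

    def teleport(self, pos):
        self._undo.append(self.current)
        self._redo.clear()  # a fresh teleport invalidates the redo history
        self.current = pos

    def undo(self):
        if self._undo:
            self._redo.append(self.current)
            self.current = self._undo.pop()
        return self.current

    def redo(self):
        if self._redo:
            self._undo.append(self.current)
            self.current = self._redo.pop()
        return self.current
```

For example, after teleporting from (0, 0) to (1, 2) and then (3, 4), `undo()` returns the user to (1, 2) and `redo()` back to (3, 4) — the rapid back-and-forth movement the abstract describes.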
Citations: 0
AniBalloons: Animated chat balloons as affective augmentation for social messaging and chatbot interaction
IF 5.3 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2024-10-18 · DOI: 10.1016/j.ijhcs.2024.103365
Pengcheng An , Chaoyu Zhang , Haichen Gao , Ziqi Zhou , Yage Xiao , Jian Zhao
Despite being prominent and ubiquitous, message-based communication is limited in nonverbally conveying emotions. Besides emoticons or stickers, messaging users continue seeking richer options for affective communication. Recent research explored using chat-balloons’ shape and color to communicate emotional states. However, little work explored whether and how chat-balloon animations could be designed to convey emotions. We present the design of AniBalloons, 30 chat-balloon animations conveying Joy, Anger, Sadness, Surprise, Fear, and Calmness. Using AniBalloons as a research means, we conducted three studies to assess the animations’ affect recognizability and emotional properties (N=40), and probe how animated chat-balloons would influence communication experience in typical scenarios including instant messaging (N=72) and chatbot service (N=70). Our exploration contributes a set of chat-balloon animations to complement nonverbal affective communication for a range of text-message interfaces, and empirical insights into how animated chat-balloons might mediate particular conversation experiences (e.g., perceived interpersonal closeness, or chatbot personality).
Citations: 0
Exploring amBiDiguity: UI item direction interpretation by Arabic and Hebrew users
IF 5.3 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2024-10-17 · DOI: 10.1016/j.ijhcs.2024.103383
Yulia Goldenberg, Noam Tractinsky
Bidirectional user interfaces serve more than half a billion users worldwide. Despite increasing diversity-driven approaches to interface development, bidirectional interfaces still use UI elements inconsistently. In particular, UI items containing ambiguous information that BiDi users might process both from right-to-left and left-to-right pose a challenge to designers. We use the term amBiDiguous to denote such items and suggest that they are susceptible to ineffective use.
This paper reports on an empirical study with 1705 Arabic and Hebrew users, in which we collected explicit and implicit data about ambiguous UI items in bidirectional interfaces. We explored the directional interpretation of amBiDiguous UI items and investigated the influence of individual, linguistic, and UI design factors on how people perceive them. The findings suggest a complex picture in which various factors affect ambiguous items’ interpretation. While the analysis indicates that preventing all interpretation errors is probably impossible, a large portion of those errors can be addressed by proper design.
Citations: 0
Visualizing speech styles in captions for deaf and hard-of-hearing viewers
IF 5.3 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2024-10-16 · DOI: 10.1016/j.ijhcs.2024.103386
SooYeon Ahn , JooYeong Kim , Choonsung Shin , Jin-Hyuk Hong
Speech styles such as extension, emphasis, and pause play an important role in capturing the audience's attention and conveying a message accurately. Unfortunately, it is challenging for Deaf and Hard-of-Hearing (DHH) people to enjoy these benefits when watching lectures with common captions. In this paper, we propose a new caption system that automatically analyzes speech styles from audio and visualizes them using visualization elements such as punctuation, paint-on, color, and boldness. We conducted a comparative study with 26 DHH viewers and found that the proposed caption system enabled them to recognize the speaker's speech style in lectures. As a result, the DHH viewers were able to watch lecture videos more vividly and were more engaged with the lectures. In particular, punctuation can be a practical solution to visualize speech styles and ensure legibility. Participants expressed a desire to use our caption system in their daily lives, providing valuable insights for future sound-visualized caption research.
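One way to picture the rendering side of such a pipeline: each recognized speech style is mapped to a visual caption element. A toy sketch — the specific style-to-markup mapping below is hypothetical, not the authors' exact scheme:

```python
# Render (word, style) annotations as styled caption markup.
# Hypothetical mapping: emphasis -> bold tag, pause -> ellipsis,
# extension -> a trailing tilde; anything else passes through unchanged.
def render_caption(tokens):
    out = []
    for word, style in tokens:
        if style == "emphasis":
            out.append(f"<b>{word}</b>")
        elif style == "pause":
            out.append(word + " ...")
        elif style == "extension":
            out.append(word + "~")
        else:
            out.append(word)
    return " ".join(out)
```

In a real system the style labels would come from an audio-analysis stage, and the markup would target the caption renderer's format rather than HTML-like tags.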
Citations: 0
When more is less: Finding the optimal balance of intelligent agents’ transparency in level 3 automated vehicles
IF 5.3 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2024-10-12 · DOI: 10.1016/j.ijhcs.2024.103384
Jing Zang, Myounghoon Jeon
In automated vehicles, the transparency of in-vehicle intelligent agents (IVIAs) is an important contributor to drivers’ perception, situation awareness, and driving performance. Our experiment examined how the transparency of an IVIA’s information level and its reliability affect drivers’ perception and performance in level 3 automated vehicles. A 3 × 2 mixed factorial design was used, with transparency (low, medium, high) as a between-subjects variable and reliability (high vs. low) as a within-subjects variable. Forty-eight participants were recruited. Results suggested that transparency influenced drivers’ takeover time, lane keeping, and jerk. The high-reliability agent was associated with higher perceived system accuracy and response speed, and resulted in a longer takeover time than the low-reliability agent. Notably, participants in the medium-transparency condition showed higher cognitive trust, lower workload, and higher situation awareness only when system reliability was high. Our findings can contribute to the advancement of intelligent agent transparency design in automated vehicles.
When more is less: Finding the optimal balance of intelligent agents’ transparency in level 3 automated vehicles
Jing Zang, Myounghoon Jeon
International Journal of Human-Computer Studies, vol. 193, Article 103384. DOI: 10.1016/j.ijhcs.2024.103384. Pub Date: 2024-10-12.
Citations: 0
Comparing typing methods for uppercase input in virtual reality: Modifier Key vs. alternative approaches
IF 5.3 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2024-10-12 DOI: 10.1016/j.ijhcs.2024.103385
Min Joo Kim, Yu Gyeong Son, Yong Min Kim, Donggun Park
Typing tasks are basic interactions in a virtual environment (VE). The presence of uppercase letters affects the meaning of words and their readability. On a QWERTY keyboard, uppercase letters are typed by switching keyboard layers with a modifier key. Because interaction in a VE typically relies on handheld controllers, this input method can cause user fatigue and errors. This study therefore proposed new alternative interactions for modifier-key input and compared their typing performance and user experience. In an experiment, 30 participants were instructed to type 10 sentences on a virtual keyboard in a VE using different typing interaction methods (shift, long press, and double-tap). Typing speed, error rate, and number of backspace inputs were measured to compare typing performance. Upon completion of the typing task, the usability, workload, and sickness associated with each typing method were evaluated. The results showed that the double-tap method scored significantly higher on typing speed, error rate, ease of use, satisfaction, and workload. This result is consistent with previous studies demonstrating that selection tasks are more efficient with fewer hand movements. The study thus suggests that the double-tap method can be considered a potential typing interaction for VEs in place of the traditional method that uses shift as a modifier key, and is expected to contribute to the design and development of user-friendly interactions.
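For context only (the paper does not publish its analysis code), the two core measures above — typing speed and error rate — are conventionally computed as words per minute (one “word” = 5 characters) and a Levenshtein-distance-based character error rate. A minimal sketch under those assumed conventions:

```python
def typing_metrics(typed, target, seconds):
    """Compute conventional typing-performance measures:
    WPM (one 'word' = 5 characters) and character error rate
    based on Levenshtein edit distance to the target sentence."""
    # Levenshtein distance via dynamic programming (two-row variant).
    m, n = len(typed), len(target)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if typed[i - 1] == target[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    distance = prev[n]

    wpm = (len(typed) / 5) / (seconds / 60) if seconds > 0 else 0.0
    error_rate = distance / max(m, n) if max(m, n) else 0.0
    return wpm, error_rate
```

For example, typing "helo" against the target "hello" in one minute yields an error rate of 0.2 (one edit over five characters).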
International Journal of Human-Computer Studies, vol. 193, Article 103385. Pub Date: 2024-10-12.
Citations: 0
Enhancing collaborative signing songwriting experience of the d/Deaf individuals
IF 5.3 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2024-10-09 DOI: 10.1016/j.ijhcs.2024.103382
Youjin Choi, ChungHa Lee, Songmin Chung, Eunhye Cho, Suhyeon Yoo, Jin-Hyuk Hong
Songwriting can be an important means of developing the personal and social skills of d/Deaf individuals, but there is a lack of research on understanding and supporting their songwriting. We aimed to understand d/Deaf people's songwriting experience in the song-signing genre, which visually represents music through sign language and body movement. Through two workshops in which mixed-hearing individuals collaborated on songwriting activities, we identified the potential and challenges of the songwriting experience and developed a music-sensory-substitution system that presents music multimodally through sound as well as visual and vibrotactile feedback. The proposed system enables mixed-hearing partners to have better collaborative interaction and a better signing-songwriting experience. Consequently, we found that the process of signing songwriting is valued by d/Deaf individuals as a means of musical self-expression and social connection, and our system increased their musical engagement while encouraging them to express themselves more through music and sign language.
International Journal of Human-Computer Studies, vol. 193, Article 103382. Pub Date: 2024-10-09.
Citations: 0