Pub Date: 2025-11-08 | DOI: 10.1016/j.ijhcs.2025.103682
Chutian Jiang, Yinan Fan, Junan Xie, Emily Kuang, Kaihao Zhang, Mingming Fan
Previewing routes to unfamiliar destinations is a crucial task for many blind and low vision (BLV) individuals to ensure safety and confidence before their journey. While prior work has primarily supported navigation during travel, less research has focused on how best to assist BLV people in previewing routes on a map. We designed a novel electrotactile system worn around the fingertip, together with the Trip Preview Assistant (TPA), to convey map elements, route conditions, and trajectories. TPA harnesses large language models (LLMs) to dynamically control and personalize electrotactile feedback, enhancing the interpretability of complex spatial map data for BLV users. In a user study with twelve BLV participants, our system demonstrated improvements in efficiency and user experience for previewing maps and routes. This work contributes to advancing the accessibility of visual map information for BLV users when previewing trips.
Title: "LLM-powered assistant with electrotactile feedback to assist blind and low vision people with maps and routes preview." International Journal of Human-Computer Studies, Vol. 207, Article 103682.
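A note on the mechanism: the abstract only states that an LLM dynamically controls and personalizes electrotactile feedback for map elements and route conditions. As a minimal illustrative sketch (not the authors' implementation), one might map LLM-described route segments to stimulation parameters roughly as follows; the segment types, frequency values, and function names are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical mapping from route segments (as an LLM might describe them)
# to electrotactile stimulation parameters. Values are illustrative only,
# not the parameters used by the TPA system described in the abstract.

@dataclass
class RouteSegment:
    kind: str          # e.g. "sidewalk", "crosswalk", "stairs"
    length_m: float    # segment length in meters

# Assumed lookup table: each map element gets a distinguishable pulse pattern.
PATTERNS = {
    "sidewalk":  {"frequency_hz": 50,  "pulses": 1},
    "crosswalk": {"frequency_hz": 120, "pulses": 3},
    "stairs":    {"frequency_hz": 200, "pulses": 5},
}

def to_stimulation(segment: RouteSegment, max_duration_s: float = 2.0) -> dict:
    """Translate one route segment into a stimulation command.

    Duration is scaled with segment length and clamped so long segments
    do not produce uncomfortably long stimulation.
    """
    pattern = PATTERNS.get(segment.kind, PATTERNS["sidewalk"])
    duration = min(max_duration_s, 0.5 + segment.length_m / 100.0)
    return {**pattern, "duration_s": round(duration, 2)}

if __name__ == "__main__":
    route = [RouteSegment("sidewalk", 80), RouteSegment("crosswalk", 12)]
    for seg in route:
        print(seg.kind, to_stimulation(seg))
```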
Pub Date: 2025-11-07 | DOI: 10.1016/j.ijhcs.2025.103675
Valentin Bauer, Giacomo Caslini, Marco Mores, Mattia Gianotti, Franca Garzotto
People with NeuroDevelopmental Conditions (NDC) and associated Intellectual Disability (ID) often face social and emotional issues affecting their well-being. Although art therapies like music-making or painting can be beneficial, they often lack adaptability for those with moderate-to-severe ID or limited language abilities. Multisensory creative technologies combining art practices offer promising solutions but remain under-explored. This research investigates the impact of integrating music-making and painting within an interactive multisensory environment to support social communication, emotional regulation, and well-being in adults with moderate-to-severe NDCs and ID. The two-player music and painting activity called MusicTraces was first created through a co-design process involving caregivers and people with NDCs and ID. Two field studies were conducted. The first study compared MusicTraces with traditional group painting with music for ten adults, while the second examined its impact over two sessions on eight adults with more severe conditions. Both studies showed improved social communication, social regulation, and well-being. Insights are discussed to enhance collaboration between people with NDCs and ID through interactive creative technologies.
Title: "MusicTraces: Combining music and painting to support adults with neurodevelopmental conditions and intellectual disabilities." International Journal of Human-Computer Studies, Vol. 207, Article 103675.
Pub Date: 2025-11-07 | DOI: 10.1016/j.ijhcs.2025.103684
Qianhui Wei, Jun Hu, Min Li
The advanced haptic technology in smartphones makes mediated social touch (MST) possible and enables richer mobile communication between people. This paper presents a generation method for MST signals on smartphones. We translate MST gesture pressure to MST signal intensity by varying vibration frequency through a function that maps pressure to frequency. We set the duration and provide different compound waveform compositions for MST signals. We conducted two user studies, each with 20 participants. The pilot study explored how likely the designed MST signals were to be understood as the intended MST gestures. We screened 23 MST signals that were suitable for the intended MST gestures. Then, we conducted a recognition task in the main study to explore to what extent the designed MST signals could be recognized as the intended MST gestures. Recall ranged from 13.3 % to 71.7 %, while precision ranged from 15.1 % to 55.1 %. These results can be referenced when designing MST signals. Our design implications include adjusting signal parameters to better match MST gestures and creating context-specific signals for different expressions. We suggest controlling the number of signals, using varied compound waveform composition forms, and adding visual stickers with vibrotactile stimuli. MST signals should also be evaluated in specific contexts, especially for mobile communication.
Title: "Designing mediated social touch for mobile communication: From hand gestures to touch signals." International Journal of Human-Computer Studies, Vol. 207, Article 103684.
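The abstract specifies that gesture pressure is conveyed by varying vibration frequency through a pressure-to-frequency function with a fixed duration, but not which function is used. A minimal sketch, assuming a simple linear mapping over a hypothetical frequency range (the paper's actual range and curve are not given):

```python
def pressure_to_frequency(pressure: float,
                          min_freq_hz: float = 80.0,
                          max_freq_hz: float = 230.0) -> float:
    """Map normalized gesture pressure (0..1) to a vibration frequency.

    A linear mapping is assumed here for illustration; the abstract only
    states that intensity is conveyed by varying frequency as a function
    of pressure, not which function the authors chose.
    """
    pressure = max(0.0, min(1.0, pressure))  # clamp to the valid range
    return min_freq_hz + pressure * (max_freq_hz - min_freq_hz)

def make_mst_signal(pressure: float, duration_s: float = 0.4) -> dict:
    """Bundle frequency and a fixed duration into one MST signal spec."""
    return {"frequency_hz": round(pressure_to_frequency(pressure), 1),
            "duration_s": duration_s}

# Example: a firm squeeze (pressure 0.9) versus a light tap (pressure 0.2).
print(make_mst_signal(0.9))  # higher frequency, perceived as more intense
print(make_mst_signal(0.2))
```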
Pub Date: 2025-11-07 | DOI: 10.1016/j.ijhcs.2025.103683
Madeleine Steeds, Marius Claudy, Benjamin R. Cowan, Anshu Suri
A 2019 UNESCO report raised concerns that female-gendered voice assistants (VAs) may be perpetuating gender-stereotypical views. Subsequent research has investigated remedies for this harm, but there is little research into the potential spillover effect of using gendered VAs onto stereotypical views of gender. We take a quantitative approach to address this gap. Over two studies (n = 235, 351), we find little predictive or causal evidence of gendered VAs relating to gender-stereotypical views, despite stereotypes being ascribed to VAs. This implies that, while we should continue to strive for equality and equity in technological design, everyday VA use may not be perpetuating gender stereotypes to the extent expected. We highlight the need for longitudinal research on gendered technology use and on problematic use cases such as using technology to simulate harassment. Further, we suggest the need for work on understanding common stereotypes held about diverse gender identities.
Title: "Do voice agents affect people’s gender stereotypes? Quantitative investigation of stereotype spillover effects from interacting with gendered voice agents." International Journal of Human-Computer Studies, Vol. 207, Article 103683.
Pub Date: 2025-11-07 | DOI: 10.1016/j.ijhcs.2025.103681
Xinyi Chen, Yao Yao
In human-to-human conversations, people sometimes interpret hesitations from their conversational partners as a clue for rejection (e.g., “um I’ll tell you later if I could come to the party” may be interpreted as “I won’t come to the party”). This type of interpretation is deeply embedded in human talkers’ understanding of social etiquette and their modeling of the interlocutor’s state of mind. In this study, we examine how human listeners interpret hesitations in robot speech in a human-robot interactive context, as compared to how they interpret human-produced hesitations. In Experiment 1, participants (N = 63) watched videos of conversations between a humanoid robot talker and a human talker, where the robot talker would give responses, with or without hesitations, to the human talker’s requests or inquiries. The participants then completed a memory test of what they remembered from the conversations. The memory test results showed that participants were significantly more likely to interpret hesitant responses from the robot as rejections compared to completely fluent robot responses. The hesitation-triggered bias toward negative interpretations was replicated in Experiment 2 with a separate group of participants (N = 59), who listened to the same conversations but as human-to-human interactions. A combined analysis found no difference in the magnitude of the hesitation bias between the two conditions. These results provide evidence that human listeners draw similar inferences from hesitant speech produced by robots and by human talkers. This study offers valuable insights for the future design of conversational AI agents, highlighting the importance of subtle speech cues in human-machine interaction.
Title: "To ‘errr’ is robot: How humans interpret hesitations in the speech of a humanoid robot." International Journal of Human-Computer Studies, Vol. 208, Article 103681.
Pub Date: 2025-11-06 | DOI: 10.1016/j.ijhcs.2025.103673
Jiyeon Oh, Jin-Woo Jeong
The rise of user-centric experiences in the digital landscape has led to a surge in demand for personalized multimedia content. Users now seek to customize not only visual but also auditory components to suit their preferences. In this context, sound design plays a crucial role, enabling users to tailor audio experiences accordingly. However, its inherent complexity poses various challenges, particularly for non-expert users. To address this challenge, we introduce SnapSound, a novel assistive system designed specifically for non-experts in sound design for video content. Our system leverages generative AI to streamline the sound design process and offers intuitive tools for sound selection, synchronization, and seamless integration with visuals. Through a user study, we evaluate SnapSound’s usability and effectiveness compared to manual editing. Furthermore, our study provides valuable insights and design recommendations for enhancing the user experience of future AI-based sound design systems. This work represents a significant step forward in empowering non-experts to easily customize their auditory experiences.
Title: "SnapSound: Empowering everyone to customize sound experience with Generative AI." International Journal of Human-Computer Studies, Vol. 207, Article 103673.
Pub Date: 2025-11-06 | DOI: 10.1016/j.ijhcs.2025.103672
Gary Perelman, Marcos Serrano, Aurélien Marchal, Emmanuel Dubois
Timelines involving 3D objects can be rendered in VR to facilitate their visualization and various forms of data analysis, such as object location or pattern detection. While different timeline shapes have been proposed in 3D, such as convex or linear, the input interaction is usually based on inherited UIs (e.g. sliders), leaving the rich input capabilities of VR controllers unexploited. Hence, there is still room for more efficient interaction with 3D timelines in VR. Our first contribution is the experimental comparison of a concave timeline shape against other existing shapes. We demonstrate that users prefer the concave shape, which allows for faster object selection and pattern detection. Our second contribution is the design of four controller-based navigation techniques using adaptive speed, i.e. allowing the users to instantly adjust the panning speed in the timeline. We experimentally compared their performance to two baselines: a slider widget and a dual-speed navigation technique. We demonstrate that users prefer the techniques based on adaptive speed, which allow for faster object selection and pattern detection. Finally, in a third experiment, we assess the scalability of the best techniques with a timeline containing a large number of elements. Our results show that the adaptive speed technique remains the most efficient with timelines containing thousands of elements.
Title: "3D timelines in VR: Adaptive speed for 3D data navigation on a concave timeline." International Journal of Human-Computer Studies, Vol. 207, Article 103672. CCS Concepts: Human-centered computing ∼ Human-computer interaction (HCI) ∼ Interaction techniques.
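The abstract describes adaptive speed as letting users instantly adjust the panning speed, without giving the transfer function. A minimal sketch, assuming pan speed is driven continuously by controller deflection through a hypothetical power-law gain (the gain, exponent, and function name are assumptions, not the technique evaluated in the paper):

```python
def adaptive_pan_speed(deflection: float,
                       base_speed: float = 2.0,
                       max_gain: float = 20.0,
                       exponent: float = 2.0) -> float:
    """Map controller deflection (0..1) to timeline panning speed.

    A non-linear (power) curve is assumed so that small deflections give
    fine control while full deflection sweeps many elements quickly.
    Illustrative only; not the transfer function from the paper.
    """
    deflection = max(0.0, min(1.0, deflection))
    return base_speed * (1.0 + max_gain * deflection ** exponent)

# Simulate panning along the timeline for a few frames.
position = 0.0          # index into the timeline, in "elements"
frame_dt = 1.0 / 72.0   # typical VR frame time in seconds
for deflection in (0.1, 0.5, 1.0):
    position += adaptive_pan_speed(deflection) * frame_dt
    print(f"deflection={deflection:.1f} -> position={position:.2f}")
```

In practice the gain and exponent would be tuned so that the technique stays controllable with timelines containing thousands of elements, which is the regime tested in the third experiment.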
Pub Date: 2025-11-01 | DOI: 10.1016/j.ijhcs.2025.103661
Xiaozhu Hu, Xiaoyu Mo, Xiaofu Jin, Yuan Chai, Yongquan Hu, Mingming Fan, Tristan Braud
Participant-involved formative evaluations are necessary to ensure the intuitiveness of UI transitions in mobile apps, but they are neither scalable nor immediate. Recent advances in AI-driven user simulation show promise, but they have not specifically targeted this scenario. This work introduces UTP (UI Transition Predictor), a tool designed to facilitate formative evaluations of UI transitions through two key user simulation models: 1. Predicting and explaining potential user uncertainty during navigation. 2. Predicting the UI element users would most likely select to transition between screens and explaining the corresponding reasons. These models are built on a human-annotated dataset of UI transitions comprising 140 UI screen pairs, encompassing both high-fidelity and low-fidelity counterparts of the screen pairs. Technical evaluation indicates that the models outperform GPT-4o in predicting user uncertainty and achieve comparable performance in predicting users’ selection of UI elements for transitions, while using a lighter, open-weight model. The tool has been validated to support the rapid screening of design flaws and the confirmation of intuitive UI transitions.
Title: "Toward AI-driven UI transition intuitiveness inspection for smartphone apps." International Journal of Human-Computer Studies, Vol. 206, Article 103661.
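The abstract names two simulation models (uncertainty prediction and next-element prediction, each with explanations) but not their programming interface. A hedged sketch of what calling such models could look like; the class, fields, and placeholder outputs below are hypothetical and are not UTP's actual API.

```python
from dataclasses import dataclass

@dataclass
class TransitionPrediction:
    uncertainty: float        # 0..1, likelihood the user is unsure where to tap
    uncertainty_reason: str   # model-generated explanation
    predicted_element: str    # UI element most likely tapped to reach the target
    selection_reason: str

class TransitionPredictor:
    """Stand-in for the two user simulation models described in the abstract.

    A real implementation would run a fine-tuned model over the source and
    target screen pair; here only the interface shape is illustrated, with
    fixed placeholder outputs.
    """

    def predict(self, source_screen: str, target_screen: str) -> TransitionPrediction:
        # Placeholder logic for illustration only.
        return TransitionPrediction(
            uncertainty=0.35,
            uncertainty_reason="The target button label does not mention the destination screen.",
            predicted_element="bottom-nav profile icon",
            selection_reason="Bottom-bar icons are the most salient navigation affordance.",
        )

if __name__ == "__main__":
    report = TransitionPredictor().predict("home.png", "profile.png")
    print(report)
```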
Pub Date: 2025-11-01 | DOI: 10.1016/j.ijhcs.2025.103662
Wen-Jie Tseng, Kasper Hornbæk
While body ownership is central to research in virtual reality (VR), it remains unclear how experiencing an avatar over time shapes a person’s summary judgment of it. Such a judgment could be a simple average of the bodily experience, or it could follow the peak-end rule, which suggests that people’s retrospective judgment correlates with the most intense and the most recent moments of their experience. We systematically manipulate body ownership over a three-minute avatar embodiment using visuomotor asynchrony. Asynchrony here serves to negatively influence body ownership. We conducted one lab study (N = 28) and two online studies (pilot: N = 97 and formal: N = 128) to investigate the influence of visuomotor asynchrony given (1) order, meaning early or late, (2) duration and magnitude while controlling the order, and (3) the interaction between order and magnitude. Our results indicate a significant order effect (later visuomotor asynchrony decreased body-ownership ratings more) but no convergent evidence on magnitude. We discuss how body-ownership judgments may be formed sensorily or affectively.
Title: "Does the peak-end rule apply to judgments of body ownership in virtual reality?" International Journal of Human-Computer Studies, Vol. 206, Article 103662.
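The two candidate summary judgments contrasted in the abstract can be written as a small computation: the peak-end summary is the mean of the most intense moment (here, the lowest momentary rating, since asynchrony disrupts ownership) and the final moment, whereas the alternative is the mean of all moments. A worked sketch with invented momentary body-ownership ratings:

```python
def simple_average(ratings):
    """Summary judgment as the mean of all momentary ratings."""
    return sum(ratings) / len(ratings)

def peak_end(ratings, negative_experience=True):
    """Peak-end summary: mean of the most intense moment and the last moment.

    For a negatively disrupted experience (visuomotor asynchrony lowering
    body ownership), the 'peak' is the lowest momentary rating.
    """
    peak = min(ratings) if negative_experience else max(ratings)
    return (peak + ratings[-1]) / 2.0

# Invented momentary ownership ratings (1-7) over a 3-minute embodiment,
# with asynchrony placed either late or early in the experience.
late_asynchrony  = [6, 6, 6, 5, 3, 2]
early_asynchrony = [2, 3, 5, 6, 6, 6]

for name, series in (("late asynchrony", late_asynchrony),
                     ("early asynchrony", early_asynchrony)):
    print(name,
          "average:", round(simple_average(series), 2),
          "peak-end:", round(peak_end(series), 2))
```

Both invented series have the same overall mean, but the peak-end summary is lower when the asynchrony arrives late, which is the direction of the order effect the study reports.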
Pub Date: 2025-10-30 | DOI: 10.1016/j.ijhcs.2025.103663
Joo Chan Kim, Karan Mitra, Saguna Saguna, Christer Åhlund, Teemu H. Laine
Technological developments, such as mobile augmented reality (MAR) and Internet of Things (IoT) devices, have expanded the data and interaction modalities available to mobile applications. This development enables intuitive data presentation and provides real-time insights into the user’s context. Due to the proliferation of available IoT data sources, user interfaces (UIs) have become complex and diversified, while mobile devices have limited screen space. This increases the need for design principles that help ensure a sufficient user experience (UX). We found that studies of design principles for IoT-enabled MAR applications are limited. Therefore, we conducted a systematic literature review to identify existing design principles applicable to IoT-enabled MAR applications. From the state-of-the-art research, we compiled and categorized 26 existing design principles into seven categories. We analyzed the UIs of three IoT-enabled MAR applications using the identified design principles and user feedback gathered from each application’s evaluation to understand what design principles can be considered in designing these applications. Among the 26 principles, we find eight that are commonly identified as possible improvements for the applications, depending on their purposes. We demonstrate the practical use of the identified principles by redesigning the UIs, and we propose five new design principles derived from the application analysis. As a result, we summarized a total of 31 design principles, including the five new ones. We expect that our findings will give researchers, educators, and practitioners interested in UX/UI development insight into the UX/UI design of IoT-enabled MAR applications.
Title: "Designwise: Design principles for multimodal interfaces with augmented reality in internet of things-enabled smart regions." International Journal of Human-Computer Studies, Vol. 207, Article 103663.