SnapSound: Empowering everyone to customize sound experience with Generative AI
Pub Date: 2025-11-06 | DOI: 10.1016/j.ijhcs.2025.103673 | Vol. 207, Article 103673
Jiyeon Oh, Jin-Woo Jeong
The rise of user-centric experiences in the digital landscape has led to a surge in demand for personalized multimedia content. Users now seek to customize not only visual but also auditory components to suit their preferences. In this context, sound design plays a crucial role, enabling users to tailor audio experiences accordingly. However, its inherent complexity poses various challenges, particularly for non-expert users. To address these challenges, we introduce SnapSound, a novel assistive system designed specifically for non-experts in sound design for video content. Our system leverages generative AI to streamline the sound design process and offers intuitive tools for sound selection, synchronization, and seamless integration with visuals. Through a user study, we evaluate SnapSound’s usability and effectiveness compared to manual editing. Furthermore, our study provides valuable insights and design recommendations for enhancing the user experience of future AI-based sound design systems. This work represents a significant step toward empowering non-experts to easily customize their auditory experiences.
{"title":"SnapSound: Empowering everyone to customize sound experience with Generative AI","authors":"Jiyeon Oh, Jin-Woo Jeong","doi":"10.1016/j.ijhcs.2025.103673","DOIUrl":"10.1016/j.ijhcs.2025.103673","url":null,"abstract":"<div><div>The rise of user-centric experiences in the digital landscape has led to a surge in demand for personalized multimedia content. Users now seek to customize not only visual but also auditory components to suit their preferences. In this context, sound design plays a crucial role, enabling users to tailor audio experiences accordingly. However, its inherent complexity poses various challenges, particularly for non-expert users. To address this challenge, we introduce SnapSound, a novel assistive system designed specifically for non-experts in sound design for video content. Our system leverages generative AI to streamline the sound design process and offers intuitive tools for sound selection, synchronization, and seamless integration with visuals. Through a user study, we evaluate SnapSound’s usability and effectiveness compared to manual editing. Furthermore, our study provides valuable insights and design recommendations for enhancing user experience of future AI-based sound design systems. This work represents a significant step forward in empowering non-experts to easily customize their auditory experiences.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"207 ","pages":"Article 103673"},"PeriodicalIF":5.1,"publicationDate":"2025-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D timelines in VR: Adaptive speed for 3D data navigation on a concave timeline
Pub Date: 2025-11-06 | DOI: 10.1016/j.ijhcs.2025.103672 | Vol. 207, Article 103672
Gary Perelman, Marcos Serrano, Aurélien Marchal, Emmanuel Dubois
Timelines involving 3D objects can be rendered in VR to facilitate their visualization and various forms of data analysis, such as object location or pattern detection. While different timeline shapes have been proposed in 3D, such as convex or linear, the input interaction is usually based on inherited UIs (e.g., sliders), leaving the rich VR controllers unexploited. Hence, there is still room for more efficient interaction with 3D timelines in VR. Our first contribution is an experimental comparison of a concave timeline shape against other existing shapes. We demonstrate that users prefer the concave shape, which allows for faster object selection and pattern detection. Our second contribution is the design of four controller-based navigation techniques using adaptive speed, i.e., allowing users to instantly adjust the panning speed in the timeline. We experimentally compared their performance to two baselines: a slider widget and a dual-speed navigation technique. We demonstrate that users prefer the techniques based on adaptive speed, which allow for faster object selection and pattern detection. Finally, in a third experiment, we assess the scalability of the best techniques with a timeline containing a large number of elements. Our results show that the adaptive speed technique remains the most efficient with timelines containing thousands of elements.
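To make the idea of adaptive speed concrete, here is a minimal sketch contrasting a continuous controller-driven speed adjustment with a dual-speed baseline. The function names, the input mapping, and all constants are assumptions; the abstract does not specify how the four techniques map controller input to speed.

```python
# Illustrative sketch only: maps a VR controller's analog trigger to timeline
# pan speed. The paper's four adaptive-speed techniques and their exact
# mappings are not described in the abstract; names and values are assumed.

def adaptive_pan_speed(trigger: float, max_speed: float = 500.0) -> float:
    """Continuous mapping: any trigger value in [0, 1] yields a pan speed
    (timeline elements per second), so users can adjust speed instantly."""
    return max(0.0, min(trigger, 1.0)) * max_speed

def dual_pan_speed(fast_pressed: bool, slow: float = 20.0, fast: float = 500.0) -> float:
    """Baseline: only two discrete speeds, toggled by a button."""
    return fast if fast_pressed else slow

def step(position: float, speed: float, dt: float, n_elements: int) -> float:
    """Advance the timeline cursor by speed * dt, clamped to the timeline."""
    return min(max(position + speed * dt, 0.0), float(n_elements - 1))
```

Under a mapping like this, a light trigger pull gives slow, precise panning while a full pull sweeps quickly across thousands of elements, which is one plausible reason adaptive speed remained efficient at scale in the third experiment.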
{"title":"3D timelines in VR: Adaptive speed for 3D data navigation on a concave timeline","authors":"Gary Perelman, Marcos Serrano, Aurélien Marchal, Emmanuel Dubois","doi":"10.1016/j.ijhcs.2025.103672","DOIUrl":"10.1016/j.ijhcs.2025.103672","url":null,"abstract":"<div><div>Timelines involving 3D objects can be rendered in VR to facilitate their visualization and various forms of data analysis, such as object location or pattern detection. While different timeline shapes have been proposed in 3D, such as convex or linear, the input interaction is usually based on inherited UIs (e.g. sliders), leaving the rich VR controllers unexploited. Hence, there is still room for more efficient interaction with 3D timelines in VR. Our first contribution is the experimental comparison of a concave timeline shape against other existing shapes. We demonstrate that users prefer the concave shape, which allows for faster object selection and pattern detection. Our second contribution is the design of four controller-based navigation techniques using adaptive speed, i.e. allowing the users to instantly adjust the panning speed in the timeline. We experimentally compared their performance to two baselines: a slider widget and a dual-speed navigation technique. We demonstrate that users prefer the techniques based on adaptive speed, which allow for faster object selection and pattern detection. Finally, in a third experiment we assess the scalability of the best techniques with a timeline containing a large number of elements. Our results show that the adaptive speed technique remains the most efficient with timelines containing thousands of elements.</div><div>CCS CONCEPTS • Human-centered computing ∼ Human computer interaction (HCI) ∼ Interaction techniques</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"207 ","pages":"Article 103672"},"PeriodicalIF":5.1,"publicationDate":"2025-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward AI-driven UI transition intuitiveness inspection for smartphone apps
Pub Date: 2025-11-01 | DOI: 10.1016/j.ijhcs.2025.103661 | Vol. 206, Article 103661
Xiaozhu Hu , Xiaoyu Mo , Xiaofu Jin , Yuan Chai , Yongquan Hu , Mingming Fan , Tristan Braud
Participant-involved formative evaluations are necessary to ensure the intuitiveness of UI transitions in mobile apps, but they are neither scalable nor immediate. Recent advances in AI-driven user simulation show promise, but they have not specifically targeted this scenario. This work introduces UTP (UI Transition Predictor), a tool designed to facilitate formative evaluations of UI transitions through two key user simulation models: (1) predicting and explaining potential user uncertainty during navigation, and (2) predicting the UI element users would most likely select to transition between screens and explaining the corresponding reasons. These models are built on a human-annotated dataset of UI transitions comprising 140 UI screen pairs, each with high-fidelity and low-fidelity counterparts. A technical evaluation indicates that the models outperform GPT-4o in predicting user uncertainty and achieve comparable performance in predicting users’ selection of UI elements for transitions, while using a lighter, open-weight model. The tool has been validated to support both rapid screening of design flaws and confirmation that UI transitions appear intuitive.
{"title":"Toward AI-driven UI transition intuitiveness inspection for smartphone apps","authors":"Xiaozhu Hu , Xiaoyu Mo , Xiaofu Jin , Yuan Chai , Yongquan Hu , Mingming Fan , Tristan Braud","doi":"10.1016/j.ijhcs.2025.103661","DOIUrl":"10.1016/j.ijhcs.2025.103661","url":null,"abstract":"<div><div>Participant-involved formative evaluations is necessary to ensure the intuitiveness of UI transition in mobile apps, but they are neither scalable nor immediate. Recent advances in AI-driven user simulation show promise, but they have not specifically targeted this scenario. This work introduces UTP (UI Transition Predictor), a tool designed to facilitate formative evaluations of UI transitions through two key user simulation models: 1. Predicting and explaining potential user uncertainty during navigation. 2. Predicting the UI element users would most likely select to transition between screens and explaining the corresponding reasons. These models are built on a human-annotated dataset of UI transitions, comprising 140 UI screen pairs and encompassing both high-fidelity and low-fidelity counterparts of UI screen pairs. Technical evaluation indicates that the models outperform GPT-4o in predicting user uncertainty and achieve comparable performance in predicting users’ selection of UI elements for transitions using a lighter, open-weight model. The tool has been validated to support the rapid screening of design flaws, and the confirmation of UI transitions appears to be intuitive.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"206 ","pages":"Article 103661"},"PeriodicalIF":5.1,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145467866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Does the peak-end rule apply to judgments of body ownership in virtual reality?
Pub Date: 2025-11-01 | DOI: 10.1016/j.ijhcs.2025.103662 | Vol. 206, Article 103662
Wen-Jie Tseng , Kasper Hornbæk
While body ownership is central to research in virtual reality (VR), it remains unclear how experiencing an avatar over time shapes a person’s summary judgment of it. Such a judgment could be a simple average of the bodily experience, or it could follow the peak-end rule, which suggests that people’s retrospective judgment correlates with the most intense and most recent moments of the experience. We systematically manipulate body ownership over a three-minute avatar embodiment using visuomotor asynchrony. Asynchrony here serves to negatively influence body ownership. We conducted one lab study (N = 28) and two online studies (pilot: N = 97 and formal: N = 128) to investigate the influence of visuomotor asynchrony given (1) order, meaning early or late, (2) duration and magnitude while controlling the order, and (3) the interaction between order and magnitude. Our results indicate a significant order effect (later visuomotor asynchrony decreased body-ownership judgments more) but no convergent evidence for an effect of magnitude. We discuss how body-ownership judgments may be formed sensorily or affectively.
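The two candidate summary-judgment models are simple to state computationally. Here is a toy sketch; the rating sequences are invented for illustration, and the study measures judgments with questionnaire ratings, not this arithmetic.

```python
# Toy contrast between an average-based and a peak-end summary of
# moment-to-moment body-ownership ratings (1 = low, 7 = high).
# The sequences are invented; they only mimic early vs. late asynchrony.

def simple_average(ratings):
    """Summary judgment as the mean of the whole experience."""
    return sum(ratings) / len(ratings)

def peak_end(ratings):
    """Peak-end rule: the summary tracks the most intense moment and the
    final moment. With asynchrony lowering ownership, the 'peak' is the
    trough (minimum rating); the summary averages it with the end."""
    return (min(ratings) + ratings[-1]) / 2

early_asynchrony = [2, 3, 5, 6, 6, 6]   # disruption at the start
late_asynchrony  = [6, 6, 6, 5, 3, 2]   # disruption at the end

print(round(simple_average(early_asynchrony), 2),
      round(simple_average(late_asynchrony), 2))          # 4.67 4.67
print(peak_end(early_asynchrony), peak_end(late_asynchrony))  # 4.0 2.0
```

An averaging account predicts identical judgments for both sequences, whereas the peak-end rule predicts a lower judgment when the asynchrony comes late, which matches the order effect the studies report.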
{"title":"Does the peak-end rule apply to judgments of body ownership in virtual reality?","authors":"Wen-Jie Tseng , Kasper Hornbæk","doi":"10.1016/j.ijhcs.2025.103662","DOIUrl":"10.1016/j.ijhcs.2025.103662","url":null,"abstract":"<div><div>While body ownership is central to research in virtual reality (VR), it remains unclear how experiencing an avatar over time shapes a person’s summary judgment of it. Such a judgment could be a simple average of the bodily experience, or it could follow the peak-end rule, which suggests that people’s retrospective judgment correlates with the most intense and recent moments in their experiencing. We systematically manipulate body ownership over a three-minute avatar embodiment using visuomotor asynchrony. Asynchrony here serves to negatively influence body ownership. We conducted one lab study (<span><math><mrow><mi>N</mi><mo>=</mo><mn>28</mn></mrow></math></span>) and two online studies (pilot: <span><math><mrow><mi>N</mi><mo>=</mo><mn>97</mn></mrow></math></span> and formal: <span><math><mrow><mi>N</mi><mo>=</mo><mn>128</mn></mrow></math></span>) to investigate the influence of visuomotor asynchrony given (1) order, meaning early or late, (2) duration and magnitude while controlling the order, and (3) the interaction between order and magnitude. Our results indicate a significant order effect — later visuomotor asynchrony decreased the rating of body-ownership judgments more — but no convergent evidence on magnitude. We discuss how body-ownership judgments may be formed sensorily or affectively.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"206 ","pages":"Article 103662"},"PeriodicalIF":5.1,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145467865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designwise: Design principles for multimodal interfaces with augmented reality in internet of things-enabled smart regions
Pub Date: 2025-10-30 | DOI: 10.1016/j.ijhcs.2025.103663 | Vol. 207, Article 103663
Joo Chan Kim , Karan Mitra , Saguna Saguna , Christer Åhlund , Teemu H. Laine
Technological developments, such as mobile augmented reality (MAR) and Internet of Things (IoT) devices, have expanded the data and interaction modalities available to mobile applications. This development enables intuitive data presentation and provides real-time insights into the user’s context. Due to the proliferation of available IoT data sources, user interfaces (UIs) have become complex and diversified, while mobile devices have limited screen space. This increases the need for design principles that help ensure a sufficient user experience (UX). We found that studies of design principles for IoT-enabled MAR applications are limited. Therefore, we conducted a systematic literature review to identify existing design principles applicable to IoT-enabled MAR applications. From the state-of-the-art research, we compiled and categorized 26 existing design principles into seven categories. We analyzed the UIs of three IoT-enabled MAR applications against the identified design principles and the user feedback gathered from each application’s evaluation, to understand which design principles should be considered in designing such applications. Among the 26 principles, eight were commonly identified as possible improvements for the applications, depending on their purposes. We demonstrate the practical use of the identified principles by redesigning the UIs, and we propose five new design principles derived from the application analysis. In total, we summarize 31 design principles, including the five new ones. We expect that our findings will give researchers, educators, and practitioners insight into the UX/UI design of IoT-enabled MAR applications.
{"title":"Designwise: Design principles for multimodal interfaces with augmented reality in internet of things-enabled smart regions","authors":"Joo Chan Kim , Karan Mitra , Saguna Saguna , Christer Åhlund , Teemu H. Laine","doi":"10.1016/j.ijhcs.2025.103663","DOIUrl":"10.1016/j.ijhcs.2025.103663","url":null,"abstract":"<div><div>Technological developments, such as mobile augmented reality (MAR) and Internet of Things (IoT) devices, have expanded available data and interaction modalities for mobile applications. This development enables intuitive data presentation and provides real-time insights into the user’s context. Due to the proliferation of available IoT data sources, user interfaces (UIs) have become complex and diversified, while mobile devices have limited screen spaces. This state increases the necessity of design principles that help to secure sufficient user experience (UX). We found that studies of design principles for IoT-enabled MAR applications are limited. Therefore, we conducted a systematic literature review to identify existing design principles applicable to IoT-enabled MAR applications. From the state-of-the-art research, we compiled and categorized 26 existing design principles into seven categories. We analyzed the UIs of three IoT-enabled MAR applications with the identified design principles and user feedback gathered from each application’s evaluation to understand what design principles can be considered in designing these applications. Among the 26 principles, we find eight principles that are commonly identified as possible improvements for the applications based on their purposes. We demonstrate the practical use of the identified principles by redesigning the UIs, and we propose five new design principles derived from the application analysis. As a result, we summarized a total of 31 design principles, including the five new ones. We expect that our findings will give insight into the UX/UI design of IoT-enabled MAR applications for researchers, educators, and practitioners interested in UX/UI development.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"207 ","pages":"Article 103663"},"PeriodicalIF":5.1,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145476258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Oh no! She is Flying! Interaction Analysis of Therapist-Robot Collaboration in Gait Rehabilitation
Pub Date: 2025-10-27 | DOI: 10.1016/j.ijhcs.2025.103620 | Vol. 206, Article 103620
Jose Pablo De la Rosa Gutierrez , Thiago Rocha Silva , Anders Stengaard Sørensen , Gitte Rasmussen
Strokes are a leading cause of disability, with many survivors experiencing significant mobility impairments. While robot-assisted rehabilitation offers a promising solution, its adoption seems challenged by high upfront costs, low flexibility, and complex configuration workflows. To address these challenges, this study conducts an interaction analysis to identify key requirements for empowering therapists to self-configure and adapt a flexible, cost-effective robot during gait rehabilitation tasks. Our analysis is based on 20 training sequences from a 60-hour video dataset collected across three experimental setups, involving young adults with motor impairments training with the assistance of the robot. Drawing on methods from Ethnomethodology and Conversation Analysis (EMCA), we examined the sequential organization of actions between technology, therapist, and participant, identifying three main stages: setup, training, and completion. During setup, coordinated actions prepare the participant and the robot for the main training task; during training, the participant and robot engage in movement while the therapist iterates adjustments; and in completion, coordinated actions prepare the participant for transitioning back to their conventional support system. Our analysis highlights requirements and strategies for developing End-User Development (EUD) environments for robot-assisted physiotherapy, emphasizing user-driven workflows, multimodal transitions between the robot’s assistance modes, and real-time robotic feedback that maintains coherence with therapists’ practices.
{"title":"Oh no! She is Flying! Interaction Analysis of Therapist-Robot Collaboration in Gait Rehabilitation","authors":"Jose Pablo De la Rosa Gutierrez , Thiago Rocha Silva , Anders Stengaard Sørensen , Gitte Rasmussen","doi":"10.1016/j.ijhcs.2025.103620","DOIUrl":"10.1016/j.ijhcs.2025.103620","url":null,"abstract":"<div><div>Strokes are a leading cause of disability, with many survivors experiencing significant mobility impairments. While robot-assisted rehabilitation offers a promising solution, its adoption seems challenged by high upfront costs, low flexibility and complex configuration workflows. To address these challenges, this study conducts an interaction analysis to identify key requirements for empowering therapists to self-configure and adapt a flexible, cost-effective robot during gait rehabilitation tasks. Our analysis is based on 20 training sequences from a 60-hour video dataset collected across three experimental setups, involving young adults with motor impairments training with the assistance of the robot. Drawing methods from Ethnomethodology and Conversation Analysis (EMCA), we examined the sequential organization of actions between technology, therapist and participant, identifying three main stages: setup, training, and completion. During setup, coordinated actions prepare the participant and the robot for the main training task; during training, participant and robot engage in movement while the therapist iterates adjustments; and in completion, coordinated actions prepare the participant for transitioning back to their conventional support system. Our analysis highlights some requirements and strategies for developing End-User Development (EUD) environments for robot-assisted physiotherapy, emphasizing user-driven workflows, multimodal transitions between robot’s assistance modes, and real-time robotic feedback that maintains coherence with therapists’ practices.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"206 ","pages":"Article 103620"},"PeriodicalIF":5.1,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145420527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mimicking fruit textures with low-cost stylus interfaces for accessible sensory interaction
Pub Date: 2025-10-27 | DOI: 10.1016/j.ijhcs.2025.103659 | Vol. 206, Article 103659
Christian Gonzalez , Luis M. Zamudio , Jesus Favela , Monica Tentori
Replicating real-world textures using haptic technology poses significant challenges, particularly in accurately capturing complex tactile qualities such as smoothness, roughness, and sharpness through vibration-based information. This research investigates how vibration parameters (intensity, duration, and off-time) can be manipulated to simulate textures inspired by fruit skins: smooth-apple, rough-melon, bumpy-corn, sharp-pineapple, and adhesive-banana. Using a user-centered design methodology, we developed five vibrotactile patterns and implemented them within the Cactus Mobile Haptic Interface. The patterns were delivered via two interaction devices, a thimble and a stylus pen, designed to enable touch-based activities and tactile experiences. An empirical study with 36 participants evaluated the devices in three tasks: tolerance to 10-second exposures, change perception, and the ability to associate patterns with real-world textures. Results showed that most patterns were accurately classified (76% with the thimble and 78% with the stylus), although the adhesive texture proved more difficult to identify. Rough and sharp textures received slightly lower comfort ratings. While there were notable confusions between similar patterns, such as the sharp–bumpy pair, the type of interaction device did not significantly affect classification accuracy. These findings highlight the potential of low-cost haptic interfaces to mimic sensations from natural, realistic textures and emphasize the importance of refining vibration design for improved perceptual clarity. This work contributes to the development of accessible, portable, and customizable haptic systems designed for sensory interaction in therapeutic, educational, and interactive environments. Future research will focus on refining ambiguous patterns and exploring sensory integration to expand the system’s usability in tactile therapy.
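As a concrete illustration of the parameter space, here is a minimal sketch. All tuple values are invented, since the abstract does not report the actual intensities, pulse durations, or off-times used in the Cactus Mobile Haptic Interface.

```python
# Hypothetical encoding of the five fruit-skin textures as vibrotactile
# patterns over the three parameters the study manipulates. All numeric
# values are assumptions for illustration, not the study's settings.
from dataclasses import dataclass

@dataclass
class VibrationPattern:
    intensity: float   # normalized motor amplitude, 0.0-1.0
    duration_ms: int   # length of each vibration pulse
    off_time_ms: int   # pause between pulses

PATTERNS = {
    "smooth-apple":    VibrationPattern(0.2, 400, 0),    # weak, continuous
    "rough-melon":     VibrationPattern(0.6, 80, 60),    # rapid fine grain
    "bumpy-corn":      VibrationPattern(0.8, 120, 200),  # widely spaced bumps
    "sharp-pineapple": VibrationPattern(1.0, 40, 40),    # strong short bursts
    "adhesive-banana": VibrationPattern(0.4, 300, 100),  # long, dragging pulses
}

def play(pattern: VibrationPattern, total_ms: int = 10_000) -> int:
    """Count pulse/pause cycles covering the 10-second exposure used in the
    tolerance task; the actual motor call is device-specific."""
    elapsed, cycles = 0, 0
    while elapsed < total_ms:
        # drive_motor(pattern.intensity, pattern.duration_ms)  # hardware hook
        elapsed += pattern.duration_ms + pattern.off_time_ms
        cycles += 1
    return cycles
```

In a parameterization like this, perceptually similar textures (e.g., sharp and bumpy) sit close together in the parameter space, which is consistent with the confusions the study reports between those patterns.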
{"title":"Mimicking fruit textures with low-cost stylus interfaces for accessible sensory interaction","authors":"Christian Gonzalez , Luis M. Zamudio , Jesus Favela , Monica Tentori","doi":"10.1016/j.ijhcs.2025.103659","DOIUrl":"10.1016/j.ijhcs.2025.103659","url":null,"abstract":"<div><div>Replicating real-world textures using haptic technology poses significant challenges, particularly in accurately capturing complex tactile qualities such as smoothness, roughness, and sharpness through vibration-based information. This research investigates how vibration parameters, intensity, duration, and off-time, can be manipulated to simulate textures inspired by fruit skins, including smooth-apple, rough-melon, bumpy-corn, sharp-pineapple, and adhesive-banana. Using a user-centered design methodology, we developed five vibrotactile patterns and implemented them within the Cactus Mobile Haptic Interface. The patterns were delivered via two interaction devices, a thimble and a stylus pen, designed to enable touch-based activities and tactile experiences. An empirical study with 36 participants evaluated those devices within three tasks: Tolerance to 10-second exposures, change perception, and the ability to associate patterns with real-world textures. Results showed that most patterns were accurately classified (76% with the thimble and 78% the stylus), although the adhesive texture proved more difficult to identify. Rough and sharp textures received slightly lower comfort ratings. While there were notable confusions between similar patterns, such as the sharp–bumpy pair, the type of interaction device did not significantly affect classification accuracy. These findings highlight the potential of low-cost haptic interfaces to mimic sensations from natural, realistic textures and emphasize the importance of refining vibration design for improved perceptual clarity. This work contributes to the development of accessible, portable, and customizable haptic systems designed for sensory interaction in therapeutic, educational, and interactive environments. Future research will focus on refining ambiguous patterns and exploring sensory integration to expand the system’s usability in tactile therapy.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"206 ","pages":"Article 103659"},"PeriodicalIF":5.1,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145420528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How do young users perceive and respond to AI-generated short-form videos? An exploration of Generation Z’s perceptions, emotional responses, and trust in AI-created video on social media platforms
Pub Date: 2025-10-24 | DOI: 10.1016/j.ijhcs.2025.103660 | Vol. 206, Article 103660
Hoai Lan Duong, Thi Kim Oanh Vo
As artificial intelligence (AI) becomes more prominent in shaping media experiences, short-form video platforms such as TikTok, Instagram Reels, and YouTube Shorts have emerged as dynamic environments for complex interaction between humans and AI. While most research in human-computer interaction has focused on AI in practical or task-based roles, there is still limited understanding of how young people perceive and respond to AI-generated short-form videos in everyday digital contexts. This study helps fill that gap by focusing on Generation Z, the most active and influential demographic on social media and the first to encounter generative AI as an integrated part of everyday digital life. Drawing on the Stimulus-Organism-Response model and Uncanny Valley Theory, and based on in-depth interviews, this study explores (1) how Gen Z perceives the human likeness and emotional authenticity of AI-generated short-form videos, and (2) what interactional and design features shape their trust and engagement. The findings reveal a mix of admiration and unease. Key design elements such as visuals, voice, storytelling, and transparency about authorship play a critical role in how young users assess authenticity and build trust. The study expands the scope of human-computer interaction by treating AI as a social and creative presence. It introduces the Perceptual Response Zone to explain how emotional realism shapes user trust and proposes a new framework for understanding how design stimuli influence engagement behaviors. These insights offer practical guidance for creating emotionally resonant AI content for youth audiences.
{"title":"How do young users perceive and respond to AI-generated short-form videos? An exploration of Generation Z’s perceptions, emotional responses, and trust in AI-created video on social media platforms","authors":"Hoai Lan Duong, Thi Kim Oanh Vo","doi":"10.1016/j.ijhcs.2025.103660","DOIUrl":"10.1016/j.ijhcs.2025.103660","url":null,"abstract":"<div><div>As artificial intelligence (AI) becomes more prominent in shaping media experiences, short form video platforms such as TikTok, Instagram Reels, and YouTube Shorts have emerged as dynamic environments for complex interaction between humans and AI. While most research in human computer interaction has focused on AI in practical or task based roles, there is still limited understanding of how young people perceive and respond to AI-generated short-form videos in everyday digital contexts. This study helps fill that gap by focusing on Generation Z, the most active and influential demographic on social media, and the first to encounter generative AI as an integrated part of everyday digital life. Drawing on the Stimulus Organism Response model and Uncanny Valley Theory, and based on in-depth interviews, this study explores (1) how Gen Z perceives the human likeness and emotional authenticity of AI generated short form videos, and (2) what interactional and design features shape their trust and engagement. The findings reveal a mix of admiration and unease. Key design elements such as visuals, voice, storytelling, and transparency about authorship play a critical role in how young users assess authenticity and build trust. The study expands the scope of Human-Computer Interaction by treating AI as a social and creative presence. It introduces the Perceptual Response Zone to explain how emotional realism shapes user trust and proposes a new framework for understanding how design stimuli influence engagement behaviors. These insights offer practical guidance for creating emotionally resonant AI content for youth audiences.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"206 ","pages":"Article 103660"},"PeriodicalIF":5.1,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145374280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Older adults co-creating and experiencing VR garden tours: Developing a participatory VR storytelling framework
Pub Date: 2025-10-17 | DOI: 10.1016/j.ijhcs.2025.103656 | Vol. 205, Article 103656
Wei Zhao , Ryan M. Kelly , Melissa J. Rogerson , Jenny Waycott
Virtual reality (VR) technology is being used in diverse ways to enable older adults to reminisce and connect. An underexplored area is the use of VR for creating and sharing personally meaningful stories. This paper examines the use of VR technology to help nine older adults share and experience others’ stories about their gardens. Through a co-design approach involving home visits, interviews, and a workshop, we examined older gardeners’ perceptions and experiences of co-creating, sharing, and viewing narrated 3D garden tours captured with a VR camera. Our findings demonstrate that participants showcased the meaning, creativity, and expertise associated with their gardens and gardening practices through the VR garden tours. Viewing the garden tours created by others fostered a sense of connection among participants through shared passions and interests. Participants valued the authenticity and individuality of the tours, perceiving them as avenues for learning, inspiration, and motivation. Viewers experienced a strong sense of presence in the garden tours, particularly due to the spatial immersion, holistic perspectives, the first-person view, and the ability to exert control in the VR system. The study reveals opportunities to enhance the VR garden tour experience, including increasing sociality, improving usability and guidance, and building communities around the tours. Drawing from these insights, we develop a framework to guide future practices of conducting participatory VR storytelling with older adults. The framework outlines three critical roles involved in the process: Storyteller, Facilitator, and Audience. It also discusses five key elements that need to be considered: Intention, Narrative, Medium, Context, and Engagement.
{"title":"Older adults co-creating and experiencing VR garden tours: Developing a participatory VR storytelling framework","authors":"Wei Zhao , Ryan M. Kelly , Melissa J. Rogerson , Jenny Waycott","doi":"10.1016/j.ijhcs.2025.103656","DOIUrl":"10.1016/j.ijhcs.2025.103656","url":null,"abstract":"<div><div>Virtual reality (VR) technology is being used in diverse ways to enable older adults to reminisce and connect. An underexplored area is the use of VR for creating and sharing personally meaningful stories. This paper examines the use of VR technology to help nine older adults share and experience others’ stories about their gardens. Through a co-design approach involving home visits, interviews, and a workshop, we examined older gardeners’ perceptions and experiences of co-creating, sharing, and viewing narrated 3D garden tours captured with a VR camera. Our findings demonstrate that participants showcased the meaning, creativity, and expertise associated with their gardens and gardening practices through the VR garden tours. Viewing the garden tours created by others fostered a sense of connection among participants through shared passions and interests. Participants valued the authenticity and individuality of the tours, perceiving them as avenues for learning, inspiration, and motivation. Viewers experienced a strong sense of presence in the garden tours, particularly due to the spatial immersion, holistic perspectives, the first-person view, and the ability to exert control in the VR system. The study reveals opportunities to enhance the VR garden tour experience, including increasing sociality, improving usability and guidance, and building communities around the tours. Drawing from these insights, we develop a framework to guide future practices of conducting participatory VR storytelling with older adults. The framework outlines three critical roles involved in the process: Storyteller, Facilitator, and Audience. It also discusses five key elements that need to be considered: Intention, Narrative, Medium, Context, and Engagement.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"205 ","pages":"Article 103656"},"PeriodicalIF":5.1,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145363527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative AI personas considered harmful? Putting forth twenty challenges of algorithmic user representation in human-computer interaction
Pub Date: 2025-10-17 | DOI: 10.1016/j.ijhcs.2025.103657 | Vol. 205, Article 103657
Danial Amin , Joni Salminen , Bernard J. Jansen , Joongi Shin , Dae Hyun Kim
Generative AI personas (GenAIPs) promise user-centred design efficiency, but their impact on existing persona challenges remains unexplored. Inspired by Dijkstra’s classic essay on harmful programming constructs, we analyze twenty challenges in persona development using Human-Centered AI (HCAI) principles. Through a literature review and an expert survey (n = 17), we find that GenAIPs transform rather than eliminate traditional persona challenges. Experts rated all challenges as problematic for GenAIPs (M > 4.0), with the highest concerns for hallucinations (M = 5.94), over-sanitization (M = 5.82), and lack of standardization (M = 5.59). Twelve of the twenty challenges are considered more problematic for GenAIPs than for conventional personas, particularly bias amplification, validation challenges, and accessibility without expertise. We provide HCAI-grounded guidelines demonstrating that effective GenAIP implementation requires human-AI collaboration rather than automation, as well as prioritizing user welfare over technical efficiency.
{"title":"Generative AI personas considered harmful? Putting forth twenty challenges of algorithmic user representation in human-computer interaction","authors":"Danial Amin , Joni Salminen , Bernard J. Jansen , Joongi Shin , Dae Hyun Kim","doi":"10.1016/j.ijhcs.2025.103657","DOIUrl":"10.1016/j.ijhcs.2025.103657","url":null,"abstract":"<div><div>Generative AI personas (GenAIPs) promise user-centred design efficiency, but their impact on different persona challenges remains unexplored. Inspired by Dijkstra’s classic essay on harmful programming constructs, we analyze twenty challenges in persona development using Human-Centered AI principles. Through literature review and expert survey (<em>n</em> = 17), we find that GenAIPs transform rather than eliminate traditional persona challenges. Experts rated all challenges as problematic for GenAIPs (<em>M</em> > 4.0), with the highest concerns for hallucinations (<em>M</em> = 5.94), over-sanitization (<em>M</em> = 5.82), and lack of standardization (<em>M</em> = 5.59). 12 out of 20 challenges are considered more problematic for GenAIPs than conventional personas, particularly bias amplification, validation challenges, and accessibility without expertise. We provide HCAI-grounded guidelines demonstrating that effective GenAIP implementation requires human-AI collaboration rather than automation and prioritizing user welfare over technical efficiency.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"205 ","pages":"Article 103657"},"PeriodicalIF":5.1,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145363526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}