Pub Date: 2025-12-26 | DOI: 10.1016/j.ijhcs.2025.103723
Mia Huong Nguyen, Dixon Prem Daniel Rajendran, Suranga Chandima Nanayakkara
Growing evidence in affective neuroscience suggests that emotion words do more than simply label emotions; they actively shape emotion concepts, which in turn influence how we experience and regulate emotions. Emotion concepts are mental representations that the brain uses to categorize and make sense of affective experiences. Expanding one’s repertoire of nuanced emotion concepts has been linked to better emotion regulation, yet this promising potential has been largely overlooked by existing technologies and interventions. To address this, we developed a conversational agent informed by the Theory of Constructed Emotions, employing granular affect labeling to expand users’ emotion concepts and strengthen regulation abilities. Through a 24-day controlled field study involving 33 participants, we found that our chatbot broadens participants’ active emotion vocabulary—a direct window to one’s emotion concepts—and significantly improves their ability to regulate emotions compared to a control emotional support chatbot. Our findings contribute to a deeper understanding of how granular affect labeling can be leveraged in technology to cultivate more nuanced emotional experiences and enhance well-being.
Title: Beyond happy and sad: Exploring granular affect labeling to enhance emotion regulation ability
(International Journal of Human-Computer Studies, vol. 209, Article 103723)
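The study's outcome measure, active emotion vocabulary, can be operationalized as the set of distinct granular emotion words a user actually produces in conversation. A minimal sketch of that idea follows; the lexicon and tokenization here are illustrative assumptions, not the authors' instrument:

```python
import re

# Illustrative granular-emotion lexicon; the study's actual word list is not given here.
GRANULAR_LEXICON = {
    "content", "elated", "serene", "nostalgic", "apprehensive",
    "exasperated", "wistful", "indignant", "melancholy", "jubilant",
}

def active_emotion_vocabulary(messages):
    """Return the distinct lexicon words a user produced across their
    messages, a rough proxy for the breadth of their emotion concepts."""
    used = set()
    for text in messages:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in GRANULAR_LEXICON:
                used.add(token)
    return used

messages = ["I felt wistful, almost melancholy.", "Today I was content."]
vocab = active_emotion_vocabulary(messages)
```

Comparing the size of this set before and after an intervention would give one crude indicator of vocabulary broadening.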
Pub Date: 2025-12-23 | DOI: 10.1016/j.ijhcs.2025.103722
Enricoandrea Laviola, Michele Gattullo, Antonio Emmanuele Uva
This study aims to validate STAAR-TECH, an Augmented Reality (AR) authoring tool for technical documentation designed for end users without expertise in AR development. Previous research on AR authoring tools has primarily focused on simplifying interfaces and interactions to enhance usability. However, these tools are inadequate for applying the Minimal AR authoring approach, according to which the information provided through AR visual assets should be the minimum needed to accomplish the task. This study proposes an authoring tool that fully adheres to this approach, supporting the selection and customization of the visual assets and properties that the literature identifies as essential for optimally conveying all information types. To validate its effectiveness, we conducted a comparative user study with Microsoft Dynamics 365 Guides as the baseline, involving 32 participants using HoloLens. We measured authoring performance, perceived workload, and user experience while using both tools. Our findings confirmed that STAAR-TECH is a highly suitable tool for AR manual creation.
Title: A comprehensive on-site authoring tool for creating augmented reality technical documentation (STAAR-TECH)
(International Journal of Human-Computer Studies, vol. 208, Article 103722)
Pub Date: 2025-12-18 | DOI: 10.1016/j.ijhcs.2025.103716
Fangli Song, Xinyi Gao, Wei Wang
Far periphery visual fields (≥ 60° eccentricities) offer great potential for designing informative peripheral displays. While the field of view (FOV) of various displays is beginning to push beyond the far periphery, the potential of using the far periphery to encode and convey diverse information remains underexplored. This study therefore aims to provide information in the far periphery, particularly for secondary tasks, by investigating the possibility of using diverse Trace motion patterns. Each trace is generated by a small dot (1°–3°) moving across frames. We investigate the perception of geometric shape traces and directional line traces, revealing a mean accuracy of 92.7% for 12 geometric shapes and 95.7% for 64 directional lines at approximately 10° size at 60° eccentricity. We then select 22 Trace patterns and evaluate their effectiveness compared with Blink in conveying diverse secondary information in a driving scenario, a typical visually intensive task with high cognitive workload. The results demonstrate that Trace is generally more noticeable, understandable, and comfortable, and less distracting from the primary task. Taken together, we suggest that diverse Trace motion patterns can be easily distinguished and are effective in presenting information in the far periphery.
Title: Investigating diverse Trace motion patterns as informative cues in the far periphery
(International Journal of Human-Computer Studies, vol. 208, Article 103716)
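The dot-trace idea is straightforward to prototype: a shape trace is simply a sequence of dot positions, one per frame. A hypothetical sketch for a circular trace follows; the sizes and frame count are assumptions for illustration, not the study's parameters:

```python
import math

def circle_trace(center_deg=(60.0, 0.0), radius_deg=5.0, n_frames=60):
    """Dot positions (in degrees of visual angle) tracing a circle,
    one position per frame: a single small dot whose path over time
    draws the shape in the viewer's far periphery."""
    cx, cy = center_deg
    positions = []
    for i in range(n_frames):
        theta = 2.0 * math.pi * i / n_frames
        positions.append((cx + radius_deg * math.cos(theta),
                          cy + radius_deg * math.sin(theta)))
    return positions

# A ~10°-wide circle (radius 5°) centred at 60° eccentricity, over 60 frames.
trace = circle_trace()
```

Other geometric shapes or directional lines would swap in a different parametric path while keeping the same frame-by-frame sampling.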
Pub Date: 2025-12-17 | DOI: 10.1016/j.ijhcs.2025.103714
Xin Sun, Rongjun Ma, Shu Wei, Pablo Cesar, Jos A. Bosch, Abdallah El Ali
As AI-generated health information proliferates online and becomes increasingly indistinguishable from human-sourced information, it becomes critical to understand how people trust and label such content, especially when the information is inaccurate. We conducted two complementary studies: (1) a mixed-methods survey (N=142) employing a 2 (source: Human vs. LLM) × 2 (label: Human vs. AI) × 3 (type: General, Symptom, Treatment) design, and (2) a within-subjects lab study (N=40) incorporating eye-tracking and physiological sensing (ECG, EDA, skin temperature). Participants were presented with health information varying by source-label combinations and asked to rate their trust, while their gaze behavior and physiological signals were recorded. We found that LLM-generated information was trusted more than human-generated content, whereas information labeled as human was trusted more than that labeled as AI. Trust remained consistent across information types. Eye-tracking and physiological responses varied significantly by source and label. Machine learning models trained on these behavioral and physiological features predicted binary self-reported trust levels with 73% accuracy and information source with 65% accuracy. Our findings demonstrate that adding transparency labels to online health information modulates trust.
Title: Understanding trust toward human versus AI-generated health information through behavioral and physiological sensing
(International Journal of Human-Computer Studies, vol. 209, Article 103714)
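As an illustration of the prediction step described above, here is a deliberately simple nearest-centroid classifier over per-trial features. The feature names, the synthetic values, and the model choice are hypothetical stand-ins; the abstract does not specify which models or features the authors used:

```python
from statistics import mean

def centroid(rows):
    # Element-wise mean of equal-length feature vectors.
    return [mean(col) for col in zip(*rows)]

def nearest_centroid_fit(X, y):
    # One centroid per class label (here: 0 = low trust, 1 = high trust).
    return {label: centroid([x for x, lab in zip(X, y) if lab == label])
            for label in set(y)}

def predict(model, x):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

# Hypothetical per-trial features: [mean fixation duration (s), EDA peaks/min, HR (bpm)]
X = [[0.31, 2.0, 71.0], [0.29, 2.2, 69.0],   # labeled low trust
     [0.45, 1.1, 64.0], [0.48, 0.9, 66.0]]   # labeled high trust
y = [0, 0, 1, 1]
model = nearest_centroid_fit(X, y)
```

In practice one would use a proper train/test split and a richer model; this only shows the shape of the feature-to-trust mapping.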
Pub Date: 2025-12-17 | DOI: 10.1016/j.ijhcs.2025.103717
Paul Raingeard de la Bletiere, Mark Neerincx, Rebecca Schaefer, Catharine Oertel
Music is widely used in human–computer interaction (HCI) to enhance engagement, sustain attention, and support cognitive stimulation. Yet its potential for deliberate mood regulation, particularly through personalized memory recall, remains largely unexplored.
Music-evoked autobiographical memories (MEAMs) are often elicited by well-known, favorite songs, yielding stronger mood effects than music without personal memory associations. However, songs can also trigger distressing memories and can never capture all positive personal memories. Since happy personal memories can enhance mood, broader retrieval methods are needed.
To address this, we introduce Constructed Music-Evoked Episodic Memories (CoMEEMs), a framework linking chosen episodic memories to music. By creating a personalized song-memory database, CoMEEMs enable autonomous mood regulation and communication in interactive systems, integrating memory cues—such as people and places—alongside mood congruence, to help choose songs with high mood regulatory impact.
In an experiment with 71 Dutch and French adults, participants described 87 positive memories and received song recommendations based on associated people and places, with and without mood matching. Results showed that song familiarity and genre were the strongest predictors of perceived fit, while valence, arousal, tempo, and lyrics played smaller roles. Mood congruence, especially in valence, significantly influenced song relevance. Participants emphasized the need for user input on emotional states and memory context. Based on these findings, we propose design guidelines to improve future music recommendation systems targeting memories.
Title: A music recommendation system for constructed music-evoked episodic memories (CoMEEMs)
(International Journal of Human-Computer Studies, vol. 208, Article 103717)
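The reported ordering of predictors (familiarity and genre strongest, valence congruence smaller, memory cues such as people and places smallest) suggests a simple weighted relevance score for ranking candidate songs. The weights and field names below are illustrative assumptions, not the authors' model:

```python
def song_relevance(song, memory, current_mood_valence,
                   w_familiarity=0.4, w_genre=0.3, w_valence=0.2, w_cue=0.1):
    """Toy relevance score for a (song, memory) pair. Weights reflect only
    the reported ordering of predictors, not fitted coefficients."""
    genre_match = 1.0 if song["genre"] in memory["preferred_genres"] else 0.0
    cue_match = 1.0 if set(song["tags"]) & set(memory["cues"]) else 0.0
    # Mood congruence: closer song valence to current mood scores higher.
    valence_congruence = 1.0 - abs(song["valence"] - current_mood_valence)
    return (w_familiarity * song["familiarity"] + w_genre * genre_match
            + w_valence * valence_congruence + w_cue * cue_match)

memory = {"preferred_genres": {"pop"}, "cues": {"beach", "friends"}}
familiar = {"familiarity": 0.9, "genre": "pop", "valence": 0.8, "tags": ["beach"]}
obscure = {"familiarity": 0.2, "genre": "metal", "valence": 0.3, "tags": []}
```

Ranking a candidate list by this score would surface familiar, genre- and mood-congruent songs tied to the memory's people and places first.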
Pub Date: 2025-12-17 | DOI: 10.1016/j.ijhcs.2025.103715
Jianhui Yan, Jiesi Zhang, Haoqiang Hua, Wenxuan Wu, Hongnan Lin, Qiwei Xiong, Jianxiu Jin, Lin Shu
Electrical muscle stimulation (EMS) has become a popular technique in human-computer interaction (HCI). However, a common challenge with EMS is the loss of sense of agency (SoA), where users feel a reduced sense of control over the body parts being stimulated. In this paper, we introduce a novel concept that leverages the loss of sense of agency induced by EMS to enhance the immersion of hand-as-prop interactions. We present EMS Hand Prop, an interaction technique that applies EMS to actuate the users’ non-dominant hands into poses that simulate virtual objects and enables the users to interact with these hand-formed virtual objects using their dominant hands in virtual reality (VR). Through two user studies where participants were tasked to retrieve and interact with objects, comparing the EMS Hand Prop with non-EMS conditions, we collected both quantitative and qualitative feedback. The results indicate that EMS Hand Prop can enhance the perceptual transformation of users’ non-dominant hands into virtual objects and increase the immersion of interacting with the hand-formed virtual objects.
Title: EMS hand prop: Leveraging the loss of sense of agency caused by electrical muscle stimulation to make hands serve better as virtual objects
(International Journal of Human-Computer Studies, vol. 208, Article 103715)
Recommender systems increasingly depend on explanations to convey the importance and relevance of the options recommended to their users. Previous research on explanations has predominantly concentrated on developing new explanation methods and studying how explanations can enhance recommendation effectiveness and understandability. However, limited attention has been given to understanding the persuasiveness of recommendations, particularly whether manipulating explanations to be persuasive can influence users’ choices and preferences for options. Here, we report a study (N=231) in which participants were required to make choices among recommended options across three tasks based on the provided explanations. These explanations were presented in the form of natural language. To study the effect of persuasiveness, the explanations were systematically manipulated by incorporating principles of reciprocity, commitment, authority, social proof, liking, and scarcity. In addition, explanations were manipulated for their rationality. The results reveal that the persuasiveness of explanations significantly impacts participants’ recommendation choice behavior, with persuasive explanations being preferred over non-persuasive ones. Moreover, persuasiveness also significantly increased participants’ tendencies toward irrational choices: an increased proportion of selecting a lower utility option when accompanied by a persuasive explanation. The results demonstrate that explanations substantially influence user choice behavior in recommender systems and that users’ choices, as well as the resulting user profiles, can be intentionally or unintentionally shaped by the introduction of persuasive elements in explanations. This could have unprecedented and cumulative effects on the bias of recommendations and can raise ethical concerns about the effect of persuasiveness on users’ long-term behavior in systems integrated with recommendation support.
Pub Date: 2025-12-17 | DOI: 10.1016/j.ijhcs.2025.103720
Authors: S.M.Tahsinur Rahman, Dominik Siemon, Tuukka Ruotsalo
Title: Persuasive explanations for recommender systems: how explanations can influence users’ choices?
(International Journal of Human-Computer Studies, vol. 208, Article 103720)
Pub Date: 2025-12-16 | DOI: 10.1016/j.ijhcs.2025.103718
Carla Luttmann, Petra Jansen
Virtual reality has emerged as a helpful tool in exercise facilitation and performance enhancement. As a highly immersive medium, it enables diversion of attention, embodiment of virtual avatars, and display of false performance feedback. These mechanisms can reduce strain, improve affective responses, and influence behaviour or performance. While the role of avatar design in these relations has been extensively studied, other areas of virtual reality require further investigation. Scene configurations in particular have the potential to enable realistic first-person exercise scenarios, for instance, by showing action feedback such as visual flow during locomotion. The present study investigates the impact of mismatched visual flow speed on heart rate and subjective effort during moderate-intensity cycling in virtual reality. Both were expected to decrease with increasing visual flow speed. Eighty university sports students cycled at a self-chosen cadence for ten minutes, viewing a virtual cycling track at one of two velocities (0.5× or 1.5× actual speed). Each participant experienced both velocity conditions in counterbalanced order. Heart rate and perceived exertion were recorded continuously. Linear mixed model analyses revealed a significant main effect of visual flow speed on heart rate but not on perceived exertion.
Title: Heart rate changes with visual flow speed in virtual reality cycling
(International Journal of Human-Computer Studies, vol. 208, Article 103718)
Pub Date : 2025-12-03 DOI: 10.1016/j.ijhcs.2025.103599
Sharmin Ahmed , Emilee Rader , Sameer Patil
People are often asked to provide their email addresses for identification, authentication, or communication purposes. In many such circumstances, people provide made-up email addresses instead of their own. To understand why people provide made-up email addresses, we interviewed 20 people who reported doing so. We found that the participants provided made-up email addresses to avoid information overload and protect privacy. The participants chose to provide made-up email addresses based on several factors, such as the context, personal benefits and risks, past experiences, and verification requirements. When composing made-up email addresses, the participants employed several common patterns based on their mental models of email address formats and threat models for undesirable uses of their email addresses. The participants reported using these patterns strategically to navigate the social expectations to comply with such requests and to avoid embarrassment from being perceived as deceptive. We connect our findings to email privacy more broadly through the theoretical perspectives of boundary regulation, communication privacy management, contextual integrity, social desirability, and interdependent privacy. Our insights point to design and regulatory suggestions to address the interdependent privacy issues resulting from made-up email addresses and to help users deal more effectively with email overload and email marketing.
{"title":"Strategic misdirection: Attempts to protect privacy with made-up email addresses","authors":"Sharmin Ahmed , Emilee Rader , Sameer Patil","doi":"10.1016/j.ijhcs.2025.103599","DOIUrl":"10.1016/j.ijhcs.2025.103599","url":null,"abstract":"<div><div>People are often asked to provide their email addresses for identification, authentication, or communication purposes. In many such circumstances, people provide made-up email addresses instead of their own. To understand why people provide made-up email addresses, we interviewed 20 people who reported doing so. We found that the participants provided made-up email addresses to avoid information overload and protect privacy. The participants chose to provide made-up email addresses based on several factors, such as the context, personal benefits and risks, past experiences, and verification requirements. When composing made-up email addresses, the participants employed several common patterns based on their mental models of email address formats and threat models for undesirable uses of their email addresses. The participants reported using these patterns strategically to navigate the social expectations to comply with such requests and to avoid embarrassment from being perceived as deceptive. We connect our findings to email privacy more broadly through the theoretical perspectives of boundary regulation, communication privacy management, contextual integrity, social desirability, and interdependent privacy. 
Our insights point to design and regulatory suggestions to address the interdependent privacy issues resulting from made-up email addresses and to help users deal more effectively with email overload and email marketing.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"208 ","pages":"Article 103599"},"PeriodicalIF":5.1,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-01 DOI: 10.1016/j.ijhcs.2025.103631
Bibhushan Raj Joshi , Ana Lucia Diaz de Leon Derby , Jennifer J. Llewellyn , Kristina Llewellyn , Jennifer Roberts-Smith , Oliver Schneider
Haptic technology offers new opportunities for interaction, yet remains inaccessible to people unfamiliar with the technology due to challenges in rapid prototyping and the absence of a widely understood vocabulary, making early-stage design communication difficult. To address these challenges, we developed Feel–Play–Imagine (FPI), a method for haptics experts to involve team members and stakeholders in the early stages of design, and explored its use in the context of storytelling. FPI involves introducing people to haptics through experiencing polished haptic demonstrations in context (Feel) and experimenting with alternative modalities (Play), then engaging in discussions using stories to imagine designed experiences (Imagine). We report on the results of using FPI in an ongoing co-design project and a lab study with 10 expert storytellers from various backgrounds. Our findings include the value of hands-on and playful experiences for learning about haptic technologies, the ability of FPI to support design decisions, the ability of the Worksheet we developed to structure discussion in some contexts, and the need to support multimodal and gestural communication when discussing haptic and tangible interaction.
{"title":"Feel–Play–Imagine: Structured introduction and imagination of haptics with storytellers","authors":"Bibhushan Raj Joshi , Ana Lucia Diaz de Leon Derby , Jennifer J. Llewellyn , Kristina Llewellyn , Jennifer Roberts-Smith , Oliver Schneider","doi":"10.1016/j.ijhcs.2025.103631","DOIUrl":"10.1016/j.ijhcs.2025.103631","url":null,"abstract":"<div><div>Haptic technology offers new opportunities for interaction, yet remains inaccessible to people unfamiliar with the technology due to challenges in rapid prototyping and the absence of a widely understood vocabulary, making early-stage design communication difficult. To address these challenges, we developed Feel–Play–Imagine (FPI), a method for haptics experts to involve team members and stakeholders in the early stages of design, and explored its use in the context of storytelling. FPI involves introducing people to haptics through experiencing polished haptic experiences in context (Feel) and experimenting with alternative modalities (Play), then engaging in discussions using stories to imagine designed experiences (Imagine). We report on the results of using FPI in an ongoing co-design project and a lab study with 10 expert storytellers from various backgrounds. 
Our findings include the value of hands-on and playful experiences for learning about haptic technologies, the ability of FPI to support design decisions, the ability of the Worksheet we developed to structure discussion in some contexts, and the need to support multimodal and gestural communication when discussing haptic and tangible interaction.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"208 ","pages":"Article 103631"},"PeriodicalIF":5.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145796792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}