Pub Date: 2024-06-05 | DOI: 10.1016/j.ijhcs.2024.103303
Cheng Chen , Jingshi Kang , Pejman Sajjadi , S. Shyam Sundar
The autoplay feature of video platforms is often blamed for users going down rabbit holes of binge-watching extreme content. However, autoplay is not necessarily a passive experience, because users can toggle the feature off if they want. While the automation aspect is passive, the toggle option signals interactivity, making it “interpassive,” which lies between completely passive autoplay and manual initiation of each video. We empirically compare these three modes of video viewing in a user study (N = 394), which exposed participants to either extreme or non-extreme content under conditions of manual play, interpassive autoplay, or completely passive autoplay. Results show that interpassive autoplay is favored over the other two. It triggers the control heuristic compared to passive autoplay, but leads to higher inattentiveness compared to manual play. Both the invoked control heuristic and inattentiveness result in higher rabbit hole perception. These findings have implications for socially responsible design of the autoplay feature.
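The three viewing modes compared in the study can be sketched as a minimal player state model (the class and method names below are hypothetical; the paper does not publish an implementation). The key distinction is that only interpassive autoplay exposes a toggle that actually does anything:

```python
from enum import Enum

class PlayMode(Enum):
    MANUAL = "manual"              # user starts every video by hand
    INTERPASSIVE = "interpassive"  # autoplay on, but user may toggle it off
    PASSIVE = "passive"            # autoplay on, no toggle offered

class Player:
    def __init__(self, mode: PlayMode):
        self.mode = mode
        self.autoplay_enabled = mode != PlayMode.MANUAL

    def toggle_autoplay(self) -> bool:
        """Only the interpassive mode exposes a working toggle."""
        if self.mode is PlayMode.INTERPASSIVE:
            self.autoplay_enabled = not self.autoplay_enabled
        return self.autoplay_enabled

    def on_video_end(self, user_clicked_next: bool) -> bool:
        """Return True if the next video starts playing."""
        return self.autoplay_enabled or user_clicked_next

p = Player(PlayMode.INTERPASSIVE)
assert p.on_video_end(user_clicked_next=False)    # autoplay carries on
p.toggle_autoplay()                               # user opts out
assert not p.on_video_end(user_clicked_next=False)
```

Under this framing, manual play and passive autoplay are the two fixed endpoints, and interpassive autoplay is the only mode where user agency and automation coexist.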
Title: "Preventing users from going down rabbit holes of extreme video content: A study of the role played by different modes of autoplay" (International Journal of Human-Computer Studies)
Pub Date: 2024-05-31 | DOI: 10.1016/j.ijhcs.2024.103291
Hayoun Moon , Mia Saade , Daniel Enriquez , Zachary Duer , Hye Sung Moon , Sang Won Lee , Myounghoon Jeon
Virtual reality (VR) has opened new possibilities for creative expression, while the 360-degree head-worn display (HWD) delivers a fully immersive experience in the world of art. The immersiveness, however, comes at the cost of blocking out the physical world, including bystanders without an HWD. Therefore, VR experiences in public (e.g., galleries, museums) often lack social interactivity, which plays an important role in forming aesthetic experiences. In the current study, we explored the application of a cross-device mixed reality (MR) platform in the domain of art to enable social and inclusive experiences with artworks that utilize VR technology. Our concept of interest features co-located audiences of HWD and mobile device users who interact across physical and virtual worlds. We conducted focus groups (N=22) and expert interviews (N=7) to identify the concept’s potential scenarios and fundamental components, as well as expected benefits and concerns. We also share our process of creating In-Between Spaces, an interactive artwork in MR that encourages social interactivity among cross-device audiences. Our exploration presents a prospective direction for future VR/MR aesthetic content, especially at public events and exhibitions targeting crowd audiences.
Title: "Mixed-reality art as shared experience for cross-device users: Materialize, understand, and explore" (International Journal of Human-Computer Studies)
With high flexibility and rich semantic expressiveness, mid-air gesture interaction is an important part of natural human-computer interaction (HCI) and has broad application prospects. However, there is no unified representation framework for designing, recording, investigating and comparing HCI mid-air gestures. Therefore, this paper proposes an interpretable coding method, DigCode, for HCI mid-air gestures. DigCode converts unstructured continuous actions into a structured discrete string encoding. From the perspective of human cognition and expression, the research employed psychophysical methods to divide gesture actions into discrete intervals, defined the coding rules of representation in letters and numbers, and developed automated programs to enable encoding and decoding using gesture sensors. The coding method can cover the existing representations of HCI mid-air gestures by considering both human understanding and computer recognition, and can be applied to HCI mid-air gesture design and gesture library construction.
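The abstract does not reveal DigCode's actual coding rules, but the core idea it describes — collapsing a continuous action into a letter-and-digit string by binning it into discrete intervals — can be sketched as follows. The joint letters and the 45-degree bins here are invented for illustration, not the paper's scheme:

```python
def encode_angle(angle_deg: float, bin_size: float = 45.0) -> str:
    """Map a continuous angle onto a discrete interval index (0..7 for 45-degree bins)."""
    return str(int(angle_deg % 360 // bin_size))

def encode_gesture(joint: str, angles: list[float]) -> str:
    """Concatenate a letter for the body part with one digit per sampled angle."""
    letters = {"wrist": "W", "elbow": "E", "shoulder": "S"}  # illustrative only
    return letters[joint] + "".join(encode_angle(a) for a in angles)

# A wrist trajectory sampled at three moments collapses to a short string:
code = encode_gesture("wrist", [10.0, 95.0, 200.0])
assert code == "W024"
```

The payoff of such an encoding is that two gesture recordings can then be compared, stored, or indexed as ordinary strings rather than as raw sensor streams.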
Title: "DigCode—A generic mid-air gesture coding method on human-computer interaction" by Xiaozhou Zhou, Lesong Jia, Ruidong Bai, Chengqi Xue. Pub Date: 2024-05-26 | DOI: 10.1016/j.ijhcs.2024.103302 (International Journal of Human-Computer Studies)
AI systems are increasingly being adopted across various domains and application areas. With this surge, there is a growing research focus and societal concern for actively involving humans in developing, operating, and adopting these systems. Despite this concern, most existing literature on AI and Human–Computer Interaction (HCI) primarily focuses on explaining how AI systems operate and, at times, allowing users to contest AI decisions. Existing studies often overlook more impactful forms of user interaction with AI systems, such as giving users agency beyond contestability and enabling them to adapt and even co-design the AI’s internal mechanics. In this survey, we aim to bridge this gap by reviewing the state of the art in Human-Centered AI, the domain where AI and HCI studies converge, extending past Explainable and Contestable AI into Interactive AI and beyond. Our analysis contributes to shaping the trajectory of future Interactive AI design and advocates for a more user-centric approach that provides users with greater agency, fostering not only their understanding of AI’s workings but also their active engagement in its development and evolution.
Title: "From explainable to interactive AI: A literature review on current trends in human-AI interaction" by Muhammad Raees, Inge Meijerink, Ioanna Lykourentzou, Vassilis-Javed Khan, Konstantinos Papangelis. Pub Date: 2024-05-23 | DOI: 10.1016/j.ijhcs.2024.103301 (International Journal of Human-Computer Studies)
Pub Date: 2024-05-22 | DOI: 10.1016/j.ijhcs.2024.103290
Yucheng Jin, Li Chen, Xianglin Zhao, Wanling Cai
The global pandemic has pushed human society into a mental health crisis, prompting the development of various chatbots to supplement the limited mental health workforce. Several organizations have employed mental health survey chatbots for public mental status assessments. These survey chatbots typically ask closed-ended questions (Closed-EQs) to assess specific psychological issues like anxiety, depression, and loneliness, followed by open-ended questions (Open-EQs) for deeper insights. While Open-EQs are naturally presented conversationally in a survey chatbot, Closed-EQs can be delivered as embedded forms or within conversations, with the length of the questionnaire varying according to the psychological assessment. This study investigates how the interaction style of Closed-EQs and the questionnaire length affect user perceptions regarding survey credibility, enjoyment, and self-awareness, as well as their responses to Open-EQs in terms of quality and self-disclosure in a survey chatbot. We conducted a 2 (interaction style: form-based vs. conversation-based) × 3 (questionnaire length: short vs. middle vs. long) between-subjects study (N=213) with a loneliness survey chatbot. The results indicate that the form-based interaction significantly enhances the perceived credibility of the assessment, thereby improving response quality and self-disclosure in subsequent Open-EQs and fostering self-awareness. We discuss our findings for the interaction design of psychological assessment in a survey chatbot for mental health.
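A 2 × 3 between-subjects design amounts to assigning each participant to exactly one of six condition cells. A balanced random assignment could be sketched like this (the condition labels come from the abstract; the authors' actual allocation procedure is not described, so this is illustrative only):

```python
import random

STYLES = ["form-based", "conversation-based"]
LENGTHS = ["short", "middle", "long"]
CELLS = [(s, l) for s in STYLES for l in LENGTHS]  # 2 x 3 = 6 cells

def assign(participants: list[str], seed: int = 0) -> dict[str, tuple[str, str]]:
    """Balanced random assignment: shuffle, then deal participants round-robin."""
    rng = random.Random(seed)
    order = participants[:]
    rng.shuffle(order)
    return {p: CELLS[i % len(CELLS)] for i, p in enumerate(order)}

groups = assign([f"P{i:03d}" for i in range(213)])  # N = 213, as in the study
assert len(groups) == 213
```

With 213 participants and six cells, each cell receives 35 or 36 people, which keeps the design close to balanced.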
Title: "The way you assess matters: User interaction design of survey chatbots for mental health" (International Journal of Human-Computer Studies)
Pub Date: 2024-05-16 | DOI: 10.1016/j.ijhcs.2024.103287
Wenjuan Zhang , Yunmei Liu , David B. Kaber
Unmanned Aerial Vehicle (UAV) control interfaces are critical channels for transferring information between the vehicle and an operator. Research on system performance has focused on enhancing vehicle automation and some work has evaluated cognitive workload for existing UAV interfaces. The potential for usable interface design to reduce cognitive workload during the early design phase has been largely overlooked. This study addresses these gaps by: (1) evaluating the effectiveness of a contemporary UAV interface design tool (the Modified GEDIS-UAV) to moderate user workload; (2) examining the effectiveness of various UAV interface designs for minimizing cognitive workload under different control task pacing; and (3) exploring the use of eye tracking measures, traditionally applied in other domains, as indicators of cognitive workload in UAV operations. We prototyped three different interface designs, classified as “baseline”, “enhanced” and “degraded” interfaces. Cognitive workload in UAV operation was manipulated in terms of levels of vehicle speed (“low” and “high”). Physiological and subjective measures of workload were collected for all combinations of interface design and task demand. Results revealed the “enhanced” interface to yield the lowest operator cognitive workload and supported operator resilience to increased control task demand, as compared to the “baseline” and “degraded” interfaces. In addition, task demand was found to elevate operator cognitive workload, particularly in terms of "mental" and "temporal" demands and operator perceptions of "performance". The study also demonstrated utility of eye-tracking technology for detecting cognitive workload in UAV operations. This research provides practical guidance for UAV control interface design to manage operator workload. The methods employed in the study are applicable to interface evaluation for various types of UAVs and other unmanned systems to enhance human-automation interaction.
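One widely used eye-tracking indicator of cognitive workload is task-evoked change in pupil diameter relative to a resting baseline. The paper's actual measures are not specified in the abstract, so the following is a generic sketch of such an index, not the authors' method:

```python
def pupil_workload_index(baseline_mm: list[float], task_mm: list[float]) -> float:
    """Percent change in mean pupil diameter from baseline to task.

    A larger positive value is commonly read as higher cognitive workload;
    any threshold for "high" workload would be study-specific.
    """
    baseline_mean = sum(baseline_mm) / len(baseline_mm)
    task_mean = sum(task_mm) / len(task_mm)
    return 100.0 * (task_mean - baseline_mean) / baseline_mean

# Example: mean pupil diameter rising from 3.0 mm at rest to 3.3 mm during
# a high-speed control task corresponds to a +10% index value.
index = pupil_workload_index([3.0, 3.0, 3.0], [3.3, 3.3, 3.3])
```

In a UAV study, such an index could be computed per interface condition and compared against subjective workload ratings collected for the same trials.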
Title: "Effect of interface design on cognitive workload in unmanned aerial vehicle control" (International Journal of Human-Computer Studies)
Pub Date: 2024-05-10 | DOI: 10.1016/j.ijhcs.2024.103288
Yanming He , Qizhang Sun , Peiyao Cheng , Shumeng Hou , Lei Zhou
Gesture elicitation study (GES) is an effective method to design gestures for various contexts. Through involving end-users, GES results in intuitive gestures because they directly reflect end-users’ mental models and preferences. However, limited by personal experience, end-users are not capable of taking full advantage of technology while proposing gestures, which is referred to as legacy bias. To overcome this, previous studies demonstrate that users’ performance can be improved by priming, such as viewing gestures, watching fictional movies, and experiencing framed scenarios. This research extends this line of studies by considering the developmental origin of image schemas in priming. More specifically, we compared the influences of no-priming, priming with early image schemas (EIS), and priming with late image schemas (LIS) on GES. Controlled experiments were conducted (N = 120) along the three stages of GES: users’ generation of gestures (Experiment 1), final gesture sets (Experiment 2), and end-users’ learnability of gestures (Experiment 3). Results show that users are largely influenced by the developmental origin of image schemas in priming. LIS-priming improves gesture proposal production in comparison to the no-priming condition. As for end-users’ evaluation, EIS-priming gestures exhibit higher initial and overall learnability.
Title: "Priming users with babies’ gestures: Investigating the influences of priming with different development origin of image schemas in gesture elicitation study" (International Journal of Human-Computer Studies)
Pub Date: 2024-05-06 | DOI: 10.1016/j.ijhcs.2024.103289
Nick Ballou , Alena Denisova , Richard Ryan , C. Scott Rigby , Sebastian Deterding
Players’ basic psychological needs for autonomy, competence, and relatedness are among the most commonly used constructs in research on what makes video games so engaging, and how they might support or undermine user wellbeing. However, existing measures of basic psychological needs in games have important limitations—they either do not measure need frustration, or measure it in a way that may not be appropriate for the video games domain; they struggle to capture feelings of relatedness in both single- and multiplayer contexts; and they often lack validity evidence for certain contexts (e.g., playtesting vs. experience with games as a whole). In this paper, we report on the design and validation of a new measure, the Basic Needs in Games Scale (BANGS), whose 6 subscales cover satisfaction and frustration of each basic psychological need in gaming contexts. The scale was validated and evaluated over five studies with a total of 1246 unique participants. Results supported the theorized structure of the scale and provided evidence for discriminant, convergent and criterion validity. Results also show that the scale performs well over different contexts (including evaluating experiences in a single game session or across various sessions) and over time, supporting measurement invariance. Further improvements to the scale are warranted, as results indicated lower reliability in the autonomy frustration subscale, and a surprising non-significant correlation between relatedness satisfaction and frustration. Despite these minor limitations, BANGS is a reliable and theoretically sound tool for researchers to measure basic needs satisfaction and frustration with a degree of domain validity not previously available.
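Subscale reliability of the kind discussed here is conventionally summarized with Cronbach's alpha. A dependency-free sketch of the statistic follows; this illustrates the standard formula, not the paper's analysis pipeline, which the abstract does not describe:

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for one subscale.

    items[i][j] is respondent i's score on item j. Alpha approaches 1.0
    when items covary strongly relative to their individual variances.
    """
    k = len(items[0])   # number of items
    def var(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

For example, a subscale whose items are answered identically by every respondent yields an alpha of exactly 1.0, while uncorrelated items drive alpha toward zero; the lower reliability reported for the autonomy frustration subscale would show up as a depressed value of exactly this kind of statistic.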
Title: "The Basic Needs in Games Scale (BANGS): A new tool for investigating positive and negative video game experiences" (International Journal of Human-Computer Studies)
Pub Date: 2024-04-27 | DOI: 10.1016/j.ijhcs.2024.103275
Luca Chittaro, Marta Serafini, Yvonne Vulcano
Virtual reality biofeedback systems for relaxation training can be an effective tool for reducing stress and anxiety levels, but most of them offer a limited user experience associated with the execution of a single task and a biofeedback mechanism that reflects a single physiological measurement. Furthermore, user evaluations of such systems do not typically include a placebo condition, making it difficult to determine the actual contribution of biofeedback. This paper proposes a VR system for breathing and relaxation training that: (i) uses biofeedback mechanisms based on multiple physiological measurements, and (ii) provides a richer user experience through a narrative that unfolds in phases where the user is the main character and controls different elements of the virtual environment through biofeedback. To evaluate the system and to assess the actual contribution of biofeedback, we compared two conditions involving 35 participants: a biofeedback condition that exploited real-time measurements of the user's breathing, skin conductance, and heart rate; and a placebo control condition, in which changes in the virtual environment followed physiological values recorded from a session with another user. The results showed that the proposed virtual experience helped users relax in both conditions, but real biofeedback produced results that were superior to placebo biofeedback, in terms of both relaxation and sense of presence. These outcomes highlight the important role that biofeedback can play in virtual reality systems for relaxation training, as well as the need for researchers to consider placebo conditions in evaluating this kind of system.
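The real-vs-placebo manipulation comes down to which signal drives the virtual environment: the user's own live measurements, or a recording from another user's session. A minimal sketch of that routing, with an invented breathing-rate-to-sky-brightness mapping (the paper's actual environment mappings are not specified in the abstract):

```python
from typing import Iterator

def real_feedback(live_breath_rate: Iterator[float]) -> Iterator[float]:
    """Real condition: the environment tracks the user's own live signal."""
    for bpm in live_breath_rate:
        yield bpm

def placebo_feedback(recorded_breath_rate: list[float]) -> Iterator[float]:
    """Placebo condition: the environment replays another user's recorded session."""
    yield from recorded_breath_rate

def sky_brightness(bpm: float) -> float:
    """Map slower breathing to a brighter (calmer) virtual sky, clamped to [0, 1].

    The 6-20 breaths/min range and the linear mapping are illustrative choices.
    """
    return max(0.0, min(1.0, (20.0 - bpm) / 14.0))
```

Because both conditions feed the environment through the same mapping, participants cannot tell the conditions apart from the visuals alone, which is exactly what makes the placebo comparison informative.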
{"title":"Virtual reality experiences for breathing and relaxation training: The effects of real vs. placebo biofeedback","authors":"Luca Chittaro, Marta Serafini, Yvonne Vulcano","doi":"10.1016/j.ijhcs.2024.103275","DOIUrl":"https://doi.org/10.1016/j.ijhcs.2024.103275","url":null,"abstract":"<div><p>Virtual reality biofeedback systems for relaxation training can be effective tools for reducing stress and anxiety levels, but most of them offer a limited user experience associated with the execution of a single task and a biofeedback mechanism that reflects a single physiological measurement. Furthermore, user evaluations of such systems do not typically include a placebo condition, making it difficult to determine the actual contribution of biofeedback. This paper proposes a VR system for breathing and relaxation training that: (i) uses biofeedback mechanisms based on multiple physiological measurements, and (ii) provides a richer user experience through a narrative that unfolds in phases, in which the user is the main character and controls different elements of the virtual environment through biofeedback. To evaluate the system and assess the actual contribution of biofeedback, we compared two conditions involving 35 participants: a biofeedback condition that exploited real-time measurements of the user's breathing, skin conductance, and heart rate; and a placebo control condition, in which changes in the virtual environment followed physiological values recorded from a session with another user. The results showed that the proposed virtual experience helped users relax in both conditions, but real biofeedback produced results superior to placebo biofeedback in terms of both relaxation and sense of presence. These outcomes highlight the important role that biofeedback can play in virtual reality systems for relaxation training, as well as the need for researchers to consider placebo conditions when evaluating such systems.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924000594/pdfft?md5=4535cdc0d947a4b827fb903b5c01e2d7&pid=1-s2.0-S1071581924000594-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140822608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
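The real-vs-placebo design described above hinges on one detail: the control loop driving the virtual environment is identical in both conditions, and only the origin of the physiological stream differs (live sensors vs. a recording from another user's session). A minimal sketch of such a loop is shown below; all names, signal ranges, and the equal-weight combination are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Sample:
    breathing_rate: float    # breaths per minute
    skin_conductance: float  # microsiemens
    heart_rate: float        # beats per minute

def normalize(value: float, lo: float, hi: float) -> float:
    """Clamp-and-scale a physiological reading to [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def relaxation_score(s: Sample) -> float:
    """Combine the three signals; lower readings mean deeper relaxation.
    Ranges and equal weights are hypothetical placeholders."""
    b = 1.0 - normalize(s.breathing_rate, 6.0, 20.0)
    c = 1.0 - normalize(s.skin_conductance, 1.0, 15.0)
    h = 1.0 - normalize(s.heart_rate, 50.0, 100.0)
    return (b + c + h) / 3.0

def control_stream(samples: Iterable[Sample]) -> Iterator[float]:
    """Yield one environment-control value per sample.

    Real condition: `samples` comes from live sensors.
    Placebo condition: `samples` replays a recorded session from
    another user. The loop itself cannot tell the difference, which
    is what makes the placebo comparison clean."""
    for s in samples:
        yield relaxation_score(s)
```

Because both conditions share this loop, any difference in outcomes can be attributed to the contingency between the user's own physiology and the environment, not to the visual changes themselves.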
Pub Date : 2024-04-25DOI: 10.1016/j.ijhcs.2024.103273
Jian Wu , Lili Wang , Sio Kei Im , Chan Tong Lam
Object manipulation is the most common form of interaction in virtual reality. We introduce an efficient and ergonomic Big-Arm method to improve the efficiency and comfort of manipulating distant objects in virtual reality. We prolong the upper-arm and forearm lengths according to the maximum distance of the manipulation space and construct a linear mapping between the real and virtual elbow angles, which makes manipulation easier to control and more efficient. We further propose an optimized elbow-angle mapping to improve the efficiency and comfort of distant object manipulation. Two user studies were designed and conducted to evaluate the performance of our optimized Big-Arm method. The results show that our method achieves significant improvements in efficiency, ergonomic performance, and task-load reduction for manipulating distant objects (distance ≥ 6 m) compared to state-of-the-art methods. At the same time, our method exhibits superior usability.
{"title":"EEBA: Efficient and ergonomic Big-Arm for distant object manipulation in VR","authors":"Jian Wu , Lili Wang , Sio Kei Im , Chan Tong Lam","doi":"10.1016/j.ijhcs.2024.103273","DOIUrl":"10.1016/j.ijhcs.2024.103273","url":null,"abstract":"<div><p>Object manipulation is the most common form of interaction in virtual reality. We introduce an efficient and ergonomic Big-Arm method to improve the efficiency and comfort of manipulating distant objects in virtual reality. We prolong the upper-arm and forearm lengths according to the maximum distance of the manipulation space and construct a linear mapping between the real and virtual elbow angles, which makes manipulation easier to control and more efficient. We further propose an optimized elbow-angle mapping to improve the efficiency and comfort of distant object manipulation. Two user studies were designed and conducted to evaluate the performance of our optimized Big-Arm method. The results show that our method achieves significant improvements in efficiency, ergonomic performance, and task-load reduction for manipulating distant objects (distance <span><math><mo>≥</mo></math></span>6 m) compared to state-of-the-art methods. At the same time, our method exhibits superior usability.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140790634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
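The two ingredients of the Big-Arm abstract — prolonging the arm segments to cover the manipulation space, and a linear real-to-virtual elbow-angle mapping — can be sketched as follows. The segment lengths, angle ranges, and function names are illustrative assumptions for exposition, not the authors' actual EEBA implementation (which additionally optimizes the mapping).

```python
import math

def virtual_segment_lengths(real_upper: float, real_fore: float,
                            max_dist: float) -> tuple:
    """Prolong both arm segments by a common factor so the fully
    extended virtual arm reaches the far end of the manipulation
    space (hypothetical uniform scaling)."""
    scale = max_dist / (real_upper + real_fore)
    return real_upper * scale, real_fore * scale

def virtual_elbow_angle(real_angle: float,
                        real_range: tuple = (math.radians(30), math.pi),
                        virt_range: tuple = (math.radians(10), math.pi)) -> float:
    """Linearly map the comfortable real elbow range onto the full
    virtual range, so a small, ergonomic real motion sweeps the
    whole virtual reach (ranges are placeholder values)."""
    (r0, r1), (v0, v1) = real_range, virt_range
    t = max(0.0, min(1.0, (real_angle - r0) / (r1 - r0)))
    return v0 + t * (v1 - v0)

def reach(upper: float, fore: float, elbow_angle: float) -> float:
    """Shoulder-to-hand distance via the law of cosines; an elbow
    angle of pi radians means a fully straightened arm."""
    return math.sqrt(upper ** 2 + fore ** 2
                     - 2.0 * upper * fore * math.cos(elbow_angle))
```

For example, a 0.3 m + 0.3 m real arm scaled to a 6 m manipulation space yields 3 m virtual segments, and straightening the real elbow to the top of its mapped range places the virtual hand at the 6 m boundary.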