Articulation work for supporting the values of students attending class via telepresence robots
Houda Elmimouni, Jennifer A. Rode, Selma Šabanović
Pub Date: 2024-06-15 | DOI: 10.1016/j.ijhcs.2024.103318
International Journal of Human-Computer Studies, Volume 190, Article 103318

Robotic Telepresence (TR) is a promising medium for providing classroom access to students who are unable to attend classes in person. While existing research has focused on TR's usability, adoption, and embodiment, research is needed on how TR supports key user values, such as identity, privacy, and courtesy, in educational contexts. To bridge this gap, we engaged 22 university students in a field study using Beam telepresence robots, which enabled us to discern the key manifestations of these three values in classroom human–robot interactions. We also identified the articulation work improvised by remote students to maintain these values. Based on our findings, we propose recommendations for use that can support these values and offer design recommendations for future telepresence robots. Our insights offer valuable guidance to educational institutions intending to integrate telepresence robots, as well as to their designers.
Designing personalized mental health interventions for anxiety: CBT therapists' perspective
Andreas Balaskas, Stephen M. Schueller, Kevin Doherty, Anna L. Cox, Gavin Doherty
Pub Date: 2024-06-14 | DOI: 10.1016/j.ijhcs.2024.103319
International Journal of Human-Computer Studies, Volume 190, Article 103319 (Open Access)

Anxiety disorders are the most common mental health problem, and cognitive-behavioral therapy (CBT) is one of the most widely used evidence-based treatments. While several mobile apps for anxiety integrate CBT techniques, major challenges remain concerning uptake and engagement. Personalization is one strategy for improving client engagement, and integrating therapist input is one mechanism for such personalization. This study aims to understand therapist practices and identify new possibilities for delivering intervention content between face-to-face CBT therapy sessions. It comprised semi-structured interviews, followed by a series of ideation activities and a thematic analysis of the data. The results showed the central role of clients in shaping the content of therapy sessions, their challenges with homework practice, and therapists' diverse practices. Analysis of the ideation activities elaborated the potential role of therapists in the personalization of apps for anxiety. We conclude with takeaways for designers of personalized mental health mobile applications.
The Guided Evaluation Method: An easier way to empirically estimate trained user performance for unfamiliar keyboard layouts
Aunnoy K Mutasim, Anil Ufuk Batmaz, Moaaz Hudhud Mughrabi, Wolfgang Stuerzlinger
Pub Date: 2024-06-13 | DOI: 10.1016/j.ijhcs.2024.103317
International Journal of Human-Computer Studies, Volume 190, Article 103317

To determine in a user study whether proposed keyboard layouts, such as OPTI, can surpass QWERTY in performance, extended training through longitudinal studies is crucial. However, creating trained users presents a logistical bottleneck. A common alternative is to have participants type the same word or phrase repeatedly. We conducted two separate studies to investigate this alternative. The findings reveal that both approaches, repeatedly typing words or repeatedly typing phrases, have limitations in accurately estimating trained user performance. Thus, we propose the Guided Evaluation Method (GEM), a novel approach to quickly estimate trained user performance with novices. Our results reveal that in a matter of minutes, participants exhibited performance similar to that reported in an existing longitudinal study: OPTI outperforms QWERTY. Because it eliminates the need for resource-intensive longitudinal studies, GEM enables much faster estimation of trained user performance. This outcome will potentially reignite research on better text entry methods.
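The GEM entry above is about estimating text-entry performance. The metric such studies conventionally report is words per minute under the five-characters-per-word convention; as a hedged illustration, a minimal sketch of that standard formula (the function name is mine, and this is the textbook metric, not code from the paper):

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard text-entry rate: ((|T| - 1) / seconds) * 60 / 5.

    |T| is the length of the transcribed string; one "word" is, by
    convention, five characters including spaces.
    """
    if seconds <= 0:
        raise ValueError("duration must be positive")
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

# A 19-character phrase typed in 10 seconds corresponds to 21.6 WPM.
print(words_per_minute("the quick brown fox", 10.0))
```

Comparing such rates for novices against those of trained users is, in essence, what the longitudinal studies GEM seeks to replace must measure over many sessions.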
Efficient VR-AR communication method using virtual replicas in XR remote collaboration
Eunhee Chang, Yongjae Lee, Mark Billinghurst, Byounghyun Yoo
Pub Date: 2024-06-11 | DOI: 10.1016/j.ijhcs.2024.103304
International Journal of Human-Computer Studies, Volume 190, Article 103304 (Open Access)

When using Virtual Reality (VR) and Augmented Reality (AR) to support remote collaboration, effective communication between a remote expert in VR and a local worker in AR is important for guiding and following task instructions. This is especially crucial for assembly tasks, which require precise identification of parts and clear directions for combining them. Despite growing interest in efficient VR-AR communication methods, previous studies have been limited to complex hardware setups and simplified assembly tasks. In this research, we introduce a communication approach for remote collaboration on complex assembly tasks that uses simplified hardware configurations. We conducted a user study (n = 30) comparing three interaction interfaces (hand gestures, 3D drawing, and virtual replicas) in terms of task completion time, subjective questionnaires, and preference rank. The results showed that using virtual replicas not only enhances task efficiency but is also strongly preferred by users. These findings indicate that virtual replicas can provide intuitive instructions to local workers, resulting in a clearer understanding of the expert's guidance.
Uncovering the theoretical basis of user types: An empirical analysis and critical discussion of user typologies in research on tailored gameful design
Jeanine Kirchner-Krath, Maximilian Altmeyer, Linda Schürmann, Bastian Kordyaka, Benedikt Morschheuser, Ana Carolina Tomé Klock, Lennart Nacke, Juho Hamari, Harald F.O. von Korflesch
Pub Date: 2024-06-10 | DOI: 10.1016/j.ijhcs.2024.103314
International Journal of Human-Computer Studies, Volume 190, Article 103314 (Open Access)

Gamification has become one of the main areas of information systems and human–computer interaction research related to users' motivations and behaviors. Within this context, a significant research gap is the lack of understanding of how users' characteristics, especially their preferences for gameful interaction (i.e., user typologies), moderate the effects of gamification and, furthermore, how gamification could be tailored to individual needs. Despite their prominence in classifying users, current typologies and their use in research and practice have received severe criticism regarding validity and reliability, as well as the application and interpretation of their results. Therefore, it is essential to reconsider the relationships and foundations of common user typologies and establish a sound empirical basis from which to critically discuss their value and limits for personalized gamification. To address this gap, this study investigated the psychometric properties of the most popular player typologies in the tailored gamification literature (i.e., Bartle's player types, Yee's motivations to play, BrainHex, and HEXAD) through a survey study (n = 877) using their respective measurement instruments, followed by a correlation analysis to understand their empirical relations and an exploratory factor analysis to identify the underlying factors. The results confirm that user typologies, despite their different origins, show considerable overlap, with some relations consistent with theoretically assumed relationships and others contradicting them. Furthermore, we show that these four user typologies factor into five underlying, fundamental dimensions of Socialization, Escapism, Achievement, Reward Pursuit, and Independence, which may essentially reflect key determinants of user motivation in gamification. Our findings imply that future research and practice in tailored gamification design should shift the focus from developing and applying ever more nuanced typologies to understanding and measuring the key underlying determinants of user motivation in gameful systems. Moreover, given the considerable interrelationships between these determinants, we argue that researchers should favor continuous representations of users' motivations in specific situations over a dichotomous operationalization of user types as static manifestations of their preferences.
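The typology study above pairs measurement instruments with a correlation analysis across scales. For readers unfamiliar with that step, a minimal self-contained sketch of a Pearson correlation between two per-participant scale scores (the scale names and data are invented for illustration; the abstract does not disclose the authors' actual analysis pipeline):

```python
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    if n != len(y) or n < 2:
        raise ValueError("need two equal-length lists with n >= 2")
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical per-participant scores on two typology scales; a strong
# positive r between scales from different typologies would indicate the
# kind of overlap the study reports.
achiever_scores = [3.0, 4.5, 2.0, 5.0, 3.5]
reward_scores = [2.5, 4.0, 2.0, 4.5, 3.0]
print(pearson_r(achiever_scores, reward_scores))
```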
New-user experience evaluation in a semi-immersive and haptic-enabled virtual reality system for assembly operations
Sharon Macias-Velasquez, Hugo I. Medellin-Castillo, Abel Garcia-Barrientos
Pub Date: 2024-06-05 | DOI: 10.1016/j.ijhcs.2024.103312
International Journal of Human-Computer Studies, Volume 190, Article 103312

Virtual reality (VR) systems have been developed to enhance conventional industrial design and manufacturing processes, including worker training and factory planning. However, research has shown that prolonged use of VR systems can cause certain discomforts for users. This research evaluates the user experience (UX) during first interactions with a semi-immersive, haptic-enabled virtual assembly system. The aim is to evaluate the UX under varying task durations, to determine whether the length of time spent on a virtual assembly task improves or deteriorates the UX during new users' initial interactions. The UX evaluation is based on key elements that characterize the user experience, such as perceptions of the product, emotions, consequences of use, realism, and physiological factors. In particular, the interest is in whether these factors vary when a virtual assembly task is performed across different time frames. The results revealed significant differences in some dimensions of the user experience, physiological factors, and realism. This information supports the formulation of guidelines to enhance the user experience of new operators of haptic-enabled virtual assembly systems.
Preventing users from going down rabbit holes of extreme video content: A study of the role played by different modes of autoplay
Cheng Chen, Jingshi Kang, Pejman Sajjadi, S. Shyam Sundar
Pub Date: 2024-06-05 | DOI: 10.1016/j.ijhcs.2024.103303
International Journal of Human-Computer Studies, Volume 190, Article 103303

The autoplay feature of video platforms is often blamed for users going down rabbit holes of binge-watching extreme content. However, autoplay is not necessarily a passive experience, because users can toggle the feature off if they want. While the automation aspect is passive, the toggle option signals interactivity, making autoplay "interpassive," lying between completely passive autoplay and manual initiation of each video. We empirically compare these three modes of video viewing in a user study (N = 394) that exposed participants to either extreme or non-extreme content under conditions of manual play, interpassive autoplay, or completely passive autoplay. Results show that interpassive autoplay is favored over the other two modes. It triggers the control heuristic relative to passive autoplay but leads to higher inattentiveness relative to manual play. Both the invoked control heuristic and the inattentiveness result in higher rabbit-hole perception. These findings have implications for socially responsible design of the autoplay feature.
Mixed-reality art as shared experience for cross-device users: Materialize, understand, and explore
Hayoun Moon, Mia Saade, Daniel Enriquez, Zachary Duer, Hye Sung Moon, Sang Won Lee, Myounghoon Jeon
Pub Date: 2024-05-31 | DOI: 10.1016/j.ijhcs.2024.103291
International Journal of Human-Computer Studies, Volume 190, Article 103291

Virtual reality (VR) has opened new possibilities for creative expression, and the 360-degree head-worn display (HWD) delivers a fully immersive experience in the world of art. This immersiveness, however, comes at the cost of blocking out the physical world, including bystanders without an HWD. As a result, VR experiences in public settings (e.g., galleries, museums) often lack social interactivity, which plays an important role in forming aesthetic experiences. In the current study, we explored the application of a cross-device mixed reality (MR) platform in the domain of art to enable social and inclusive experiences with artworks that utilize VR technology. Our concept features co-located audiences of HWD and mobile device users who interact across physical and virtual worlds. We conducted focus groups (N = 22) and expert interviews (N = 7) to identify the concept's potential scenarios and fundamental components, as well as expected benefits and concerns. We also share our process of creating In-Between Spaces, an interactive MR artwork that encourages social interactivity among cross-device audiences. Our exploration presents a prospective direction for future VR/MR aesthetic content, especially at public events and exhibitions targeting crowd audiences.
DigCode—A generic mid-air gesture coding method on human-computer interaction
Xiaozhou Zhou, Lesong Jia, Ruidong Bai, Chengqi Xue
Pub Date: 2024-05-26 | DOI: 10.1016/j.ijhcs.2024.103302
International Journal of Human-Computer Studies, Volume 189, Article 103302

With high flexibility and rich semantic expressiveness, mid-air gesture interaction is an important part of natural human-computer interaction (HCI) and has broad application prospects. However, there is no unified representation framework for designing, recording, investigating, and comparing HCI mid-air gestures. Therefore, this paper proposes an interpretable coding method, DigCode, for HCI mid-air gestures. DigCode converts unstructured continuous actions into structured, discrete string encodings. From the perspective of human cognition and expression, the research employed psychophysical methods to divide gesture actions into discrete intervals, defined coding rules that represent these intervals in letters and numbers, and developed automated programs for encoding and decoding using gesture sensors. The coding method covers existing representations of HCI mid-air gestures while accommodating both human understanding and computer recognition, and it can be applied to HCI mid-air gesture design and gesture library construction.
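The DigCode abstract describes mapping continuous gesture actions onto discrete intervals coded as letters and numbers. The paper's actual coding rules are not reproduced here; purely as an illustration of that general idea, a sketch with invented joint labels and interval widths:

```python
def encode_joint_angle(joint: str, angle_deg: float, bin_width: float = 30.0) -> str:
    """Discretize one continuous gesture parameter (a joint angle) into a
    letter-plus-number code: the letter names the joint, the number names
    the angular interval the measurement falls into."""
    if not 0.0 <= angle_deg < 360.0:
        raise ValueError("angle must be in [0, 360)")
    return f"{joint}{int(angle_deg // bin_width)}"

def encode_gesture(samples: dict[str, float]) -> str:
    """Concatenate per-joint codes into one structured string for a frame,
    sorted by joint label so the encoding is deterministic."""
    return "-".join(encode_joint_angle(j, a) for j, a in sorted(samples.items()))

# With 30-degree bins, a 75-degree wrist ('W') angle falls into interval 2
# and a 10-degree elbow ('E') angle into interval 0.
print(encode_gesture({"W": 75.0, "E": 10.0}))
```

The appeal of such a string form, as the abstract notes, is that it is both human-readable and trivially comparable by a computer, e.g. two gestures match an entry in a gesture library exactly when their code strings are equal.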
AI systems are increasingly being adopted across various domains and application areas. With this surge comes a growing research focus and societal concern for actively involving humans in developing, operating, and adopting these systems. Despite this concern, most existing literature on AI and Human–Computer Interaction (HCI) primarily focuses on explaining how AI systems operate and, at times, allowing users to contest AI decisions. Existing studies often overlook more impactful forms of user interaction with AI systems, such as giving users agency beyond contestability and enabling them to adapt and even co-design the AI’s internal mechanics. In this survey, we aim to bridge this gap by reviewing the state of the art in Human-Centered AI literature, the domain where AI and HCI studies converge, extending past Explainable and Contestable AI into Interactive AI and beyond. Our analysis contributes to shaping the trajectory of future Interactive AI design and advocates for a more user-centric approach that provides users with greater agency, fostering not only their understanding of AI’s workings but also their active engagement in its development and evolution.
{"title":"From explainable to interactive AI: A literature review on current trends in human-AI interaction","authors":"Muhammad Raees , Inge Meijerink , Ioanna Lykourentzou , Vassilis-Javed Khan , Konstantinos Papangelis","doi":"10.1016/j.ijhcs.2024.103301","DOIUrl":"10.1016/j.ijhcs.2024.103301","url":null,"abstract":"<div><p>AI systems are increasingly being adopted across various domains and application areas. With this surge, there is a growing research focus and societal concern for actively involving humans in developing, operating, and adopting these systems. Despite this concern, most existing literature on AI and Human–Computer Interaction (HCI) primarily focuses on explaining how AI systems operate and, at times, allowing users to contest AI decisions. Existing studies often overlook more impactful forms of user interaction with AI systems, such as giving users agency beyond contestability and enabling them to adapt and even co-design the AI’s internal mechanics. In this survey, we aim to bridge this gap by reviewing the state-of-the-art in Human-Centered AI literature, the domain where AI and HCI studies converge, extending past Explainable and Contestable AI, delving into the Interactive AI and beyond. 
Our analysis contributes to shaping the trajectory of future Interactive AI design and advocates for a more user-centric approach that provides users with greater agency, fostering not only their understanding of AI’s workings but also their active engagement in its development and evolution.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"189 ","pages":"Article 103301"},"PeriodicalIF":5.4,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141142214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}