ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory
Sebastian Hubenschmid, Johannes Zagermann, Daniel Leicht, Harald Reiterer, Tiare M. Feuchtner
Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user’s visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a “sweet spot” for extending smartphones with augmented reality, informing the design of hybrid user interfaces.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3581438
"I Am a Mirror Dweller": Probing the Unique Strategies Users Take to Communicate in the Context of Mirrors in Social Virtual Reality
Kexue Fu, Yixing Chen, Jiaxun Cao, Xin Tong, Ray Lc
Increasingly popular social virtual reality (VR) platforms like VRChat have created new ways for people to interact with each other, generating dedicated user communities with unique idioms of socializing in an alternative world. In VRChat, users frequently gather in front of mirrors en masse during online interactions. Understanding how user communities deal with the mirror’s unique interactions can generate insights for supporting communication in social VR. In this study, we investigated the mirror’s synergistic effect with avatars on user behaviors and conversational performance. Qualitative findings indicate that avatar-mediated communication through mirrors provides functions like ensuring synchronization of incarnations, increasing immersion, and enhancing idealized embodiment to express bolder behaviors anonymously. Quantitative findings show that while mirrors improve self-perception, they have a potentially adverse effect on conversational performance, similar to the role of self-viewing in video conferencing. Studying how users interact with mirrors in an immersive environment allows us to explore how digital environments affect spatialized interactions when transported from physical to digital domains.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3581464
TmoTA: Simple, Highly Responsive Tool for Multiple Object Tracking Annotation
M. T. Oyshi, Sebastian Vogt, S. Gumhold
Machine learning (ML) is applied in a multitude of sectors with very impressive results. This success is due to the availability of an ever-growing amount of data acquired by omnipresent sensor devices and platforms on the internet. However, most ML methods require labeled data, which is scarce and whose generation demands considerable time and resources. In this paper, we propose a portable, open-source, simple, and responsive manual Tool for 2D multiple object Tracking Annotation (TmoTA). Besides responsiveness, our tool design provides several features, such as view centering and looped playback, that speed up the annotation process. We evaluate our proposed tool by comparing TmoTA with the widely used manual labeling tools CVAT and Label Studio and with the two semi-automated tools Supervisely and VATIC with respect to object labeling time and accuracy. The evaluation includes a user study and preliminary case studies showing that, compared to the manual labeling tools, the annotation time per object frame can be reduced by 20% to 40% over the first 20 annotated objects.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3581185
Exploring Long-Term Mediated Relations with a Shape-Changing Thing: A Field Study of coMorphing Stool
Ce Zhong, Ron Wakkary, William Odom, Mikael Wiberg, Amy Yo Sue Chen, D. Oogjes, Jordan White, Minyoung Yoo
This paper presents a long-term field study of the coMorphing stool: a computational thing that can change shape in response to the surrounding light. We deployed 5 coMorphing stools to 5 participants' homes over 9 months. As co-speculators, the participants reflected on their mediated relations with the coMorphing stool. Findings suggest that they perceived the subtle transformations of the coMorphing stool in the early days of the deployment. After becoming familiar with these features, they interpreted their daily entanglements with the coMorphing stool in diverse personalized ways. Over time, the co-speculators accepted the coMorphing stool as part of their homes. These findings contribute new empirical insights to the shape-changing research field in HCI and enrich discussions on higher-level concepts in postphenomenology. Reflecting on these experiences promotes further HCI explorations on computational things.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3581140
Are You Human? Investigating the Perceptions and Evaluations of Virtual Versus Human Instagram Influencers
Anika Nissen, C. Conrad, A. Newman
Virtual influencers (VI) are on the rise on Instagram, and companies increasingly cooperate with them for marketing campaigns. This has motivated a growing number of studies investigating our perceptions of these influencers. Most studies report that VI are often rated lower in perceived trust and higher in uncanniness. Yet, we still lack a deeper understanding of why this is the case. We conducted two studies: (1) a questionnaire with 150 participants to capture general perceptions of the included influencers, and (2) an electroencephalography (EEG) study to gain insights into the underlying neural mechanisms of influencer perception. Our results support findings from related work regarding lower trust and higher uncanniness associated with VI. Interestingly, the EEG components N400 and LPP tracked not perceived trust, but rather perceived humanness, uncanniness, and intentions to follow recommendations. This provides a fruitful starting point for future research on virtual humans.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3580943
Challenging but Connective: Large-Scale Characteristics of Synchronous Collaboration Across Time Zones
Lillio Mok, Lu Sun, Shilad Sen, Bahareh Sarrafzadeh
Organizations are becoming increasingly distributed, and many need to collaborate synchronously over great geographical distances. Despite a rich body of literature on spatially-distanced meetings, gaps remain in our understanding of temporally-distanced meetings. Here, we characterize cross-time-zone collaborations by analyzing 20 million meetings scheduled at a multinational corporation, Microsoft, supported by a survey of how 130 employees perceive their scheduling needs. We find that cross-time-zone meetings are closely associated with scheduling patterns around early morning and late evening hours, which are challenging and discordant with employees’ stated temporal preferences. Additionally, the burdens of meeting across time boundaries are asymmetrically distributed among workers at different levels of the organization and different geolocations. Nonetheless, we further observe evidence that cross-time-zone attendees are organizationally distant and diverse, suggesting that addressing these challenges by limiting meetings would deprive employees of opportunities to connect. We conclude by sharing opportunities for facilitating cross-time-zone meetings that foster healthier global collaborations.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3581141
The Shifting Sands of Labour: Changes in Shared Care Work with a Smart Home Health System
Ewan Soubutts, Elaine Czech, Amid Ayobi, R. Eardley, K. Cater, A. O'Kane
Whilst the use of smart home systems has shown promise in recent years for supporting older people's activities at home, more evidence is needed to understand how these systems impact the type and amount of shared care in the home. To efficiently and effectively support an increasingly aging population with technology, it is important to understand how care recipients' and caregivers' labour changes with the introduction of a smart home system. Five older households (8 participants) were interviewed before, immediately after, and three months after receiving a Smart Home Health System (SHHS). We identify and document critical incidents and barriers that increased inter-household care work and prevented the SHHS from being successfully accepted within homes. We frame our findings within the growing body of work on smart homes for health and care, and provide implications for designing future systems for shared home care needs.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3581546
Transparency, Fairness, and Coping: How Players Experience Moderation in Multiplayer Online Games
Renkai Ma, Yao Li, Yubo Kou
Multiplayer online games seek to address toxic behaviors such as trolling and griefing through behavior moderation, where penalties such as chat restriction or account suspension are issued against toxic players in the hope that punishments create a teachable moment for punished players to reflect and improve future behavior. While punishments impact player experience (PX) in profound ways, little is known regarding how players experience behavior moderation. In this study, we conducted a survey of 291 players to understand their experiences with punishments in online multiplayer games. Through several statistical analyses, we found that moderation explanation plays a critical role in improving players’ perceived transparency and fairness of moderation; and these perceptions significantly affect what players do after punishments. We discuss moderation experience as an important facet of PX, bridge the game and moderation literature, and provide design implications for behavior moderation in multiplayer online games.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3581097
Assertiveness-based Agent Communication for a Personalized Medicine on Medical Imaging Diagnosis
F. M. Calisto, João Gabriel de Matos Fernandes, Margarida Morais, Carlos Santiago, João Maria Veigas Abrantes, N. Nunes, Jacinto Nascimento
Intelligent agents are showing increasing promise for clinical decision-making in a variety of healthcare settings. While a substantial body of work has examined the best strategies for conveying these agents’ decisions to clinicians, few studies have considered the impact of personalizing and customizing these communications on clinicians’ performance and receptiveness. This raises the question of how intelligent agents should adapt their tone to their target audience. We designed two approaches to communicate the decisions of an intelligent agent for breast cancer diagnosis with different tones: a suggestive (non-assertive) tone and an imposing (assertive) one. We used an intelligent agent to communicate: (1) the number of detected findings; (2) cancer severity on each breast and per medical imaging modality; (3) a visual scale representing severity estimates; (4) the sensitivity and specificity of the agent; and (5) clinical arguments about the patient, such as pathological co-variables. Our results demonstrate that assertiveness plays an important role in how this communication is perceived and in its benefits. We show that personalizing assertiveness according to the professional experience of each clinician can reduce medical errors and increase satisfaction, bringing a novel perspective to the design of adaptive communication between intelligent agents and clinicians.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3580682
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis
Riccardo Bovo, D. Giunchi, Ludwig Sidenmark, Joshua Newn, Hans-Werner Gellersen, E. Costanza, T. Heinis
Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 2023. https://doi.org/10.1145/3544548.3581283