Phonetroller: Visual Representations of Fingers for Precise Touch Input with Mobile Phones in VR
Fabrice Matulic, Aditya Ganeshan, Hiroshi Fujiwara, Daniel Vogel
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445583

Smartphone touch screens are potentially attractive for interaction in virtual reality (VR). However, the user cannot see the phone or their hands in a fully immersive VR setting, impeding their ability for precise touch input. We propose mounting a mirror above the phone screen such that the front-facing camera captures the thumbs on or near the screen. This enables the creation of semi-transparent overlays of thumb shadows and inference of fingertip hover points with deep learning, which help the user aim for targets on the phone. A study compares the effect of visual feedback on touch precision in a controlled task and qualitatively evaluates three example applications demonstrating the potential of the technique. The results show that the enabled style of feedback is effective for thumb-size targets, and that the VR experience can be enriched by using smartphones as VR controllers supporting precise touch input.
Crowdsourcing Design Guidance for Contextual Adaptation of Text Content in Augmented Reality
John J. Dudley, Jason T. Jacques, P. Kristensson
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445493

Augmented Reality (AR) can deliver engaging user experiences that seamlessly meld virtual content with the physical environment. However, building such experiences is challenging due to the developer's inability to assess how uncontrolled deployment contexts may influence the user experience. To address this issue, we demonstrate a method for rapidly conducting AR experiments and real-world data collection in the user's own physical environment using a privacy-conscious mobile web application. The approach leverages the large number of distinct user contexts accessible through crowdsourcing to efficiently source diverse context and perceptual preference data. The insights gathered through this method complement emerging design guidance and sample-limited lab-based studies. The utility of the method is illustrated by re-examining the design challenge of adapting AR text content to the user's environment. Finally, we demonstrate how gathered design insight can be operationalized to provide adaptive text content functionality in an AR headset.
Exploring Technology Design for Students with Vision Impairment in the Classroom and Remotely
Vinitha Gadiraju, Olwyn Doyle, Shaun K. Kane
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445755

Teachers of the Visually Impaired (TVIs) teach academic and functional living skills simultaneously to prepare students with vision impairment to be successful and independent. Current educational tools primarily focus on academic instruction rather than the multifaceted approach these students need. Our work aims to understand how technology can integrate behavioral skills, like independence, and support TVIs in their preferred teaching strategies. We observed elementary classrooms at a school for the blind for six weeks to study how educators design lessons and use technology to supplement their instruction in different subjects. After the observational study, we conducted remote interviews with educators to understand how technology can support students in building academic and behavioral skills in person and remotely. Educators suggested incorporating audio feedback that motivates students to play and learn consistently, tracking student progress for parents and educators, and designing features that help students build independence and develop collaborative skills.
Evaluating an App to Promote a Better Visit Through Shared Activities for People Living with Dementia and their Families
Diego Muñoz, S. Favilla, S. Pedell, Andrew Murphy, Jeanie Beh, Tanya Petrovich
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445764

This project aims to foster shared positive experiences between people living with moderate to advanced dementia and their visitors, who may struggle to find topics to talk about and engaging things to do together. To promote a better visit, we trialed a previously developed app that includes eight games with twenty-one residents and their partners or carers across four care centers for three months each. Through interviews and data logging, we found that residents preferred games that were closer to their interests and skills, and that gameplay and cooperation fostered meaningful and shared interactions between residents and their visitors. The contribution of this work is twofold: (1) insights and opportunities into dyadic interactions when using an app and into promoting positive social experiences through technology design, and (2) reflections on the challenges of evaluating the benefits of technology for people living with dementia.
Investigating the Impact of Real-World Environments on the Perception of 2D Visualizations in Augmented Reality
Marc Satkowski, Raimund Dachselt
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445330

In this work we report on two comprehensive user studies investigating how real-world backgrounds influence the perception of Augmented Reality (AR) visualizations. Since AR is an emerging technology, it is important to also consider productive use cases, which is why we chose an exemplary and challenging Industry 4.0 environment. Our basic perceptual research focuses on both the visual complexity of backgrounds and the influence of a secondary task. Contrary to our expectations, data from our 34 study participants indicate that the background has far less influence on the perception of AR visualizations than anticipated. Moreover, we observed a mismatch between measured and subjectively reported performance. We discuss the importance of the background and offer recommendations for visual real-world augmentations. Overall, our results suggest that AR can be used in many visually challenging environments without losing the ability to work productively with the visualizations shown.
Assisting Manipulation and Grasping in Robot Teleoperation with Augmented Reality Visual Cues
S. A. Arboleda, Franziska Rücker, Tim Dierks, J. Gerken
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445398

Teleoperating industrial manipulators in co-located spaces can be challenging. Facilitating robot teleoperation by providing additional visual information about the environment and the robot affordances using augmented reality (AR) can improve task performance in manipulation and grasping. In this paper, we present two designs of augmented visual cues that aim to enhance the visual space of the robot operator through hints about the position of the robot gripper in the workspace and in relation to the target. These visual cues aim to improve distance perception and thus task performance. We evaluate both designs against a baseline in an experiment where participants teleoperate a robotic arm to perform pick-and-place tasks. Our results show performance improvements at different levels, reflected in objective and subjective measures, with trade-offs in terms of time, accuracy, and participants' views of teleoperation. These findings show the potential of AR not only in teleoperation, but also in understanding the human-robot workspace.
MeetingCoach: An Intelligent Dashboard for Supporting Effective & Inclusive Meetings
Samiha Samrose, Daniel J. McDuff, Robert Sim, Jina Suh, Kael Rowan, Javier Hernández, Sean Rintel, Kevin Moynihan, M. Czerwinski
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445615

Video-conferencing is essential for many companies, but its limitations in conveying social cues can lead to ineffective meetings. We present MeetingCoach, an intelligent post-meeting feedback dashboard that summarizes contextual and behavioral meeting information. Through an exploratory survey (N=120), we identified important signals (e.g., turn taking, sentiment) and used these insights to create a wireframe dashboard. The design was evaluated in situ with participants (N=16) who helped identify the components they would prefer in a post-meeting dashboard. After recording video-conferencing meetings of eight teams over four weeks, we developed an AI system to quantify the meeting features and created personalized dashboards for each participant. Through interviews and surveys (N=23), we found that reviewing the dashboard helped improve attendees' awareness of meeting dynamics, with implications for improved effectiveness and inclusivity. Based on our findings, we provide suggestions for future feedback system designs for video-conferencing meetings.
Don't You Know That You're Toxic: Normalization of Toxicity in Online Gaming
Nicole A. Beres, Julian Frommel, Elizabeth Reid, R. Mandryk, Madison Klarkowski
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445157

Video game toxicity, endemic to online play, represents a pervasive and complex problem. Antisocial behaviours in online play directly harm player wellbeing, enjoyment, and retention—but research has also revealed that some players normalize toxicity as an inextricable and acceptable element of the competitive video game experience. In this work, we explore perceptions of toxicity and how they are predicted by player traits, demonstrating that participants reporting a higher tendency towards Conduct Reconstrual, Distorting Consequences, Dehumanization, and Toxic Online Disinhibition perceive online game interactions as less toxic. Through a thematic analysis on willingness to report, we also demonstrate that players abstain from reporting toxic content because they view it as acceptable, typical of games, as banter, or as not their concern. We propose that these traits and themes represent contributing factors to the cyclical normalization of toxicity. These findings further highlight the multifaceted nature of toxicity in online video games.
Conversational Futures: Emancipating Conversational Interactions for Futures Worth Wanting
Minha Lee, Renee Noortman, Cristina Zaga, A. Starke, Gijs Huisman, Kristina Andersen
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445244

We present a vision for conversational user interfaces (CUIs) as probes for speculating with, rather than as objects to speculate about. Popular CUIs, e.g., Alexa, are changing the way we converse, narrate, and imagine the world(s) to come. Yet, current conversational interactions may normatively promote undesirable ends, delivering a restricted range of request-response interactions with sexist and digital colonialist tendencies. Our critical design approach envisions alternatives by considering how future voices can reside in CUIs as enabling probes. We present novel explorations that illustrate the potential of CUIs as critical design material, by critiquing present norms and conversing with imaginary species. As micro-level interventions, we show that conversations with diverse futures through CUIs can persuade us to critically shape our discourse on macro-scale concerns of the present, e.g., sustainability. We reflect on how conversational interactions with pluralistic, imagined futures can contribute to how being human stands to change.
Armed in ARMY: A Case Study of How BTS Fans Successfully Collaborated to #MatchAMillion for Black Lives Matter
S. Park, Nicole K. Santero, Blair Kaneshiro, Jin Ha Lee
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021), 2021-05-06. DOI: https://doi.org/10.1145/3411764.3445353

Music fans strategically support their artists. Their collective efforts can extend to social causes as well: in 2020, for example, ARMY—the fandom of the music group BTS—successfully organized the #MatchAMillion campaign to raise over one million USD to support Black Lives Matter. To better understand the factors behind fandoms' collaborative success for arguably unrelated social goals, we conducted a survey focusing on ARMYs' perceptions of their fandom and their social effort. Most ARMYs viewed the fandom as a community, loosely structured around pillar accounts. They reported trust in each other as well as high team composition, which mediated the relationship between their neutral psychological safety and high efficacy. Respondents attributed their success in #MatchAMillion to shared values, good teamwork, and established infrastructure. Our findings elucidate contextual factors that contribute to ARMY's collaborative success and highlight themes that may be applied to studying other fandoms and their collaborative efforts.