"Sharing Earthquake Narratives: Making Space for Others in our Autobiographical Design Process"
Claudia Núñez-Pacheco, Emma Frid
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3580977

Abstract: As interaction designers venture to design for others based on autobiographical experiences, it becomes particularly relevant to critically distinguish the designer's voice from others' experiences. However, few reports detail how self and others mutually shape the design process, or how to incorporate external evaluation into these designs. We describe a one-year process involving the design and evaluation of a prototype combining haptics and storytelling, aiming to materialise and share somatic memories of earthquakes experienced by a designer and her partner. We contribute three strategies for bringing others into our autobiographical processes, avoiding the dilution of first-person voices while critically addressing design flaws that might hinder the representation of our stories.
"You spin me right round, baby, right round: Examining the Impact of Multi-Sensory Self-Motion Cues on Motion Sickness During a VR Reading Task"
Katharina Margareta Theresa Pöhlmann, Gang Li, Mark Mcgill, Reuben Markoff, S. Brewster
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3580966

Abstract: Motion sickness is a problem for many in everyday travel and will become more prevalent with the rise of automated vehicles. Virtual Reality (VR) headsets have shown significant promise in transit, enabling passengers to engage in immersive entertainment and productivity experiences. In a controlled multi-session motion sickness study using an actuated rotating chair, we examine the potential of multi-sensory visual and auditory motion cues, presented during a VR reading task, for mitigating motion sickness. We found that visual cues were most effective at reducing symptoms, with auditory cues showing some beneficial effects when combined with the visual cues. Motion sickness had negative effects on presence as well as task performance, and despite the cognitive demand and multi-sensory cues, motion sickness still reached problematic levels. Our work emphasises the need for effective mitigations and the design of stronger multi-sensory motion cues if VR is to fulfil its potential for passengers.
"Comparing Measures of Perceived Challenge and Demand in Video Games: Exploring the Conceptual Dimensions of CORGIS and VGDS"
Alexander Flint, A. Denisova, N. Bowman
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3581409

Abstract: Measuring perceived challenge and demand in video games is crucial, as these player experiences are essential to creating enjoyable games. Two recent measures identified seemingly distinct structures of challenge (the Challenge Originating from Recent Gameplay Interaction Scale, CORGIS: cognitive, emotional, performative, decision-making) and demand (the Video Game Demand Scale, VGDS: cognitive, emotional, controller, exertional, social); the two have been theorised to overlap, reflecting the five-factor demand structure. To investigate this overlap, we compared a five-factor (complete overlap) and a nine-factor (no overlap) model by surveying 1,101 players, asking them to recall their last gaming experience before completing CORGIS and VGDS. After failing to confirm either model, we conducted an exploratory factor analysis. Our findings reveal seven dimensions: the five-factor VGDS model holds alongside two additional CORGIS dimensions, performative and decision-making, ultimately providing a more holistic understanding of the concepts whilst highlighting unique aspects of each approach.
"Understanding Version Control as Material Interaction with Quickpose"
Eric Rawn, Jingyi Li, E. Paulos, Sarah E. Chasins
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3581394

Abstract: Whether a programmer with code or a potter with clay, practitioners engage in an ongoing process of working and reasoning with materials. Existing discussions in HCI have provided rich accounts of these practices and processes, which we synthesize into three themes: (1) reciprocal discovery of goals and materials, (2) local knowledge of materials, and (3) annotation for holistic interpretation. We then apply these design principles generatively to the domain of version control to present Quickpose: a version control system for creative coding. In an in-situ, longitudinal study of Quickpose guided by our themes, we collected usage data, version history, and interviews. Our study explored our participants' material interaction behaviors and the initial promise of our proposed measures for recognizing these behaviors. Quickpose is an exploration of version control as material interaction, using existing discussions to inform domain-specific concepts, measures, and designs for version control systems.
"Conformal, Seamless, Sustainable: Multimorphic Textile-forms as a Material-Driven Design Approach for HCI"
Holly McQuillan, E. Karana
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3581156

Abstract: Technology embedded in HCI textiles has great potential for enabling novel interactions and enriched experiences but, unless carefully designed, could inadvertently worsen HCI's sustainability problem. In an attempt to bridge sustainability debates and practical material-driven scholarship in HCI, we propose Multimorphic Textile-forms (MMTF), a design approach developed through a lens of multiplicity and extended life cycles that facilitates change in both design/production and use-time via simultaneous thinking about the qualities and behaviour of material and form. We provide a number of cases, textile-form methods, and vocabulary to enable exploration in this emerging design space. MMTF grants insights into textiles as complex material systems whose behaviour can be tuned across material, interaction, and ecological scales for conformal, seamless, and sustainable outcomes.
"How bold can we be? The impact of adjusting font grade on readability in light and dark polarities"
Hilary Palmén, Michael Gilbert, David Crossland
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3581552

Abstract: Variable font technology enables adjusting fonts along scaled axes that can include weight and grade. While making text bold increases character width, grade achieves boldness without increasing character width or causing text reflow. Through two studies with a total of 459 participants, we examined the effect of varying grade levels on both glancing and paragraph reading tasks in light and dark modes. We show that dark text on a light background (Light Mode, LM) is read reliably faster than its polar opposite (Dark Mode, DM). We found an effect of mode for both glance and paragraph reading, and an effect of grade for LM at heavier, increased grade levels. Paragraph readers did not choose, or prefer, LM over DM despite its fluency benefits and reported visual clarity. Software designers can vary grade across the tested font formats to influence design aesthetics and user preferences without worrying about reducing reading fluency.
"Engaging Passers-by with Rhythm: Applying Feedforward Learning to a Xylophonic Media Architecture Facade"
Binh Duc Nguyen, Jihae Han, M. Houben, Y. Bayoumi, A. Vande Moere
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3580761

Abstract: Media architecture exploits interactive technology to encourage passers-by to engage with an architectural environment. Whereas most media architecture installations focus on visual stimulation, we developed a permanent media facade that rhythmically knocks xylophone blocks embedded beneath 11 window sills, according to the human actions continuously traced via an overhead camera. In an attempt to overcome its apparent limitations in engaging passers-by more enduringly and purposefully, our study investigates the impact of feedforward learning, a constructive interaction method that instructs passers-by about the results of their actions. Based on a comparative study (n=25) and a one-month in-the-wild study (n=1877), we propose how feedforward learning could empower passers-by to understand the interaction of more abstract types of media architecture, and how particular quantitative indicators capturing this learning could predict how enduringly and purposefully a passer-by might engage. We believe these contributions could inspire more creative integrations of non-visual modalities in future public interactive interventions.
"ONYX: Assisting Users in Teaching Natural Language Interfaces Through Multi-Modal Interactive Task Learning"
Marcel Ruoff, B. Myers, A. Maedche
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3580964

Abstract: Users are increasingly empowered to personalize natural language interfaces (NLIs) by teaching them how to handle new natural language (NL) inputs. However, our formative study found that when teaching new NL inputs, users require assistance in clarifying the ambiguities that arise and want insight into which parts of the input the NLI understands. In this paper we introduce ONYX, an intelligent agent that interactively learns new NL inputs by combining NL programming and programming-by-demonstration, an approach also known as multi-modal interactive task learning. To address these challenges, ONYX suggests how it could handle new NL inputs based on previously learned concepts or user-defined procedures, and poses follow-up questions to resolve ambiguities in user demonstrations, using visual and textual aids to clarify the connections. Our evaluation shows that users provided with ONYX's new features achieved significantly higher accuracy in teaching new NL inputs (median: 93.3%) than those without (median: 73.3%).
"What Makes Creators Engage with Online Critiques? Understanding the Role of Artifacts' Creation Stage, Characteristics of Community Comments, and their Interactions"
Qingyu Guo, Chao Zhang, Hanfang Lyu, Zhenhui Peng, Xiaojuan Ma
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3581054

Abstract: Online critique communities (OCCs) provide a convenient space for creators to solicit feedback on their artifacts and improve their skills. Creators' behavioral, emotional, and cognitive engagement with comments on their works contributes to their skill development. However, which kinds of critique creators find engaging may change with the creation stage of their shared artifacts. In this paper, we first model three dimensions of engagement expressed in creators' replies to peer comments. We then quantitatively examine, via regression analysis, how their engagement is affected by the artifact's stage and by feedback characteristics. Results show that creators sharing works-in-progress tend to exhibit lower behavioral and emotional engagement, but higher cognitive engagement, than those sharing complete works. An increase in the valence of the feedback is associated with a stronger increase in behavioral engagement for seekers sharing complete works than for those sharing works-in-progress. Finally, we discuss how our insights could benefit OCCs and other online help-seeking platforms.
"Visualization of Speech Prosody and Emotion in Captions: Accessibility for Deaf and Hard-of-Hearing Users"
Caluã de Lacerda Pataca, Matthew Watkins, Roshan Peiris, Sooyeon Lee, Matt Huenerfauth
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023-04-19. DOI: https://doi.org/10.1145/3544548.3581511

Abstract: Speech is expressive in ways that caption text does not capture, with emotion and emphasis information left unconveyed. We interviewed eight Deaf and Hard-of-Hearing (DHH) individuals to understand if and how captions' inexpressiveness impacts them in online meetings with hearing peers. Automatically captioned speech, we found, lacks affective depth, lending it a hard-to-parse ambiguity and general dullness. Interviewees regularly feel excluded, which some understand to be an inherent quality of these types of meetings rather than a consequence of current caption text design. We then developed three novel captioning models that depicted, beyond words, features from prosody, emotions, and a mix of both. In an empirical study, 16 DHH participants compared these models with conventional captions. The emotion-based model outperformed traditional captions in depicting emotions and emphasis, with only a moderate loss in legibility, suggesting its potential as a more inclusive design for captions.