Title: Using Pseudo-Stiffness to Enrich the Haptic Experience in Virtual Reality
Authors: Yannick Weiss, Steeven Villa, Albrecht Schmidt, Sven Mayer, Florian Müller
DOI: https://doi.org/10.1145/3544548.3581223
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: Providing users with a haptic sensation of the hardness and softness of objects in virtual reality is an open challenge. While physical props and haptic devices help, their haptic properties do not allow for dynamic adjustments. To overcome this limitation, we present a novel technique for changing the perceived stiffness of objects based on a visuo-haptic illusion. We achieved this by manipulating the hands’ Control-to-Display (C/D) ratio in virtual reality while pressing down on an object with fixed stiffness. In the first study (N=12), we determine the detection thresholds of the illusion. Our results show that we can exploit a C/D ratio from 0.7 to 3.5 without user detection. In the second study (N=12), we analyze the illusion’s impact on the perceived stiffness. Our results show that participants perceive the objects to be up to 28.1% softer and 8.9% stiffer, allowing for various haptic applications in virtual reality.
Title: Olfactory Wearables for Mobile Targeted Memory Reactivation
Authors: Judith Amores Fernandez, Nirmita Mehra, B. Rasch, P. Maes
DOI: https://doi.org/10.1145/3544548.3580892
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: This paper investigates how a smartphone-controlled olfactory wearable might improve memory recall. We conducted a within-subjects experiment with 32 participants, with the device and without it (control). In the experimental condition, bursts of odor were released during visuo-spatial memory navigation tasks and replayed during sleep the following night in the participants’ homes. We found that, compared to control, memory performance improved when using the scent wearable in memory tasks that involved walking in a physical space. Furthermore, participants recalled more objects and translations when re-exposed to the same scent during the recall test, in addition to during sleep. These effects were statistically significant, and, in the object recall task, they also persisted for more than one week. This experiment demonstrates a potential practical application of olfactory interfaces that can interact with a user during wake as well as sleep to support memory.
Title: Understanding Communication Strategies and Viewer Engagement with Science Knowledge Videos on Bilibili
Authors: Yu Zhang, Changyang He, Huanchen Wang, Zhicong Lu
DOI: https://doi.org/10.1145/3544548.3581476
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: As a popular form of online media, videos have been widely used to communicate scientific knowledge on video-sharing platforms. These science knowledge videos take advantage of rich, multi-modal information that has the potential to provoke public engagement with science and promote self-learning. However, how communicators strategically craft science knowledge videos to engage viewers, and how specific communication strategies correlate with viewer engagement, remain under-explored. In this paper, we first established a taxonomy of communication strategies currently used in science knowledge videos on Bilibili and then examined the correlations between communication strategies and viewers’ behavioral, emotional, and cognitive engagement as measured by post-video comments. Our findings reveal the landscape of rich science communication strategies in science knowledge videos and uncover the correlations between these strategies and viewer engagement. We situate our results within prior research on science communication and HCI, and provide design implications for video-sharing platforms to support effective science communication.
Title: Are Embodied Avatars Harmful to our Self-Experience? The Impact of Virtual Embodiment on Body Awareness
Authors: Nina Döllinger, Erik Wolf, M. Botsch, M. Latoschik, Carolin Wienrich
DOI: https://doi.org/10.1145/3544548.3580918
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: Virtual Reality (VR) allows us to replace our visible body with a virtual self-representation (avatar) and to explore its effects on our body perception. While the feeling of owning and controlling a virtual body is widely researched, how VR affects the awareness of internal body signals (body awareness) remains open. Forty participants performed moving meditation tasks in reality and VR, either facing their mirror image or not. Both the virtual environment and avatars photorealistically matched their real counterparts. We found a negative effect of VR on body awareness, mediated by feeling embodied in and changed by the avatar. Further, we revealed a negative effect of a mirror on body awareness. Our results indicate that assessing body awareness should be essential in evaluating VR designs and avatar embodiment aiming at mental health, as even a scenario as close to reality as possible can distract users from their internal body signals.
Title: User Onboarding in Virtual Reality: An Investigation of Current Practices
Authors: Edwige Chauvergne, M. Hachet, Arnaud Prouzeau
DOI: https://doi.org/10.1145/3544548.3581211
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: Explaining to novice users how to interact in immersive VR applications can be challenging, in part because learners are isolated from the real world and asked to manipulate hardware and software they are not used to. Consequently, the onboarding phase, which consists of teaching users how to interact with the application, is particularly crucial. In this paper, we aim to give a better understanding of current VR onboarding methods and their benefits and challenges. We performed ergonomic reviews of 21 VR tutorials and 15 interviews with VR experts experienced in VR onboarding. Building on the results, we propose a conceptual framework for VR onboarding and discuss important research directions for the design of future efficient onboarding solutions adapted to VR.
Title: Don’t Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanations as Questions Improve Human Logical Discernment Accuracy over Causal AI Explanations
Authors: Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, P. Maes
DOI: https://doi.org/10.1145/3544548.3580672
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: Critical thinking is an essential human skill. Despite its importance, research reveals that our reasoning ability suffers from personal biases and cognitive resource limitations, leading to potentially dangerous outcomes. This paper presents the novel idea of AI-framed Questioning, which turns information relevant to an AI classification into questions that actively engage users’ thinking and scaffold their reasoning process. We conducted a study with 204 participants comparing the effects of AI-framed Questioning on a critical thinking task: discernment of the logical validity of socially divisive statements. Our results show that, compared to no feedback and even to causal AI explanations from an always-correct system, AI-framed Questioning significantly increases human discernment of logically flawed statements. Our experiment exemplifies a future style of human-AI co-reasoning system in which the AI becomes a critical thinking stimulator rather than an information teller.
Title: SAWSense: Using Surface Acoustic Waves for Surface-bound Event Recognition
Authors: Yasha Iravantchi, Yi Zhao, Kenrick Kin, Alanson P. Sample
DOI: https://doi.org/10.1145/3544548.3580991
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: Enabling computing systems to understand user interactions with everyday surfaces and objects can drive a wide range of applications. However, existing vibration-based sensors (e.g., accelerometers) lack the sensitivity to detect light touch gestures or the bandwidth to recognize activity containing high-frequency components. Conversely, microphones are highly susceptible to environmental noise, degrading performance. Each time an object impacts a surface, Surface Acoustic Waves (SAWs) are generated that propagate along the air-to-surface boundary. This work repurposes a Voice PickUp Unit (VPU) to capture SAWs on surfaces (including smooth surfaces, odd geometries, and fabrics) over long distances and in noisy environments. Our custom-designed signal acquisition, processing, and machine learning pipeline demonstrates utility in both interactive and activity recognition applications, such as classifying trackpad-style gestures on a desk and recognizing 16 cooking-related activities, all with >97% accuracy. Ultimately, SAWs offer a unique signal that can enable robust recognition of user touch and on-surface events.
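The acquisition-to-recognition pipeline this abstract describes follows a common acoustic-event-recognition pattern: window the sensor stream, extract spectral features, and classify. The sketch below illustrates that pattern only; the band-pooled log-spectrum features and the nearest-centroid classifier are stand-ins, not the authors' actual pipeline.

```python
import numpy as np

def spectral_features(window: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Pool the log-magnitude spectrum of one sensor window into n_bins
    bands -- a generic front end for acoustic event classification
    (illustrative; not the paper's exact feature set)."""
    mag = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    bands = np.array_split(mag, n_bins)
    return np.log1p(np.array([b.sum() for b in bands]))

class NearestCentroid:
    """Minimal classifier stand-in: label by closest class mean."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {
            c: np.mean([x for x, label in zip(X, y) if label == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, X):
        return [min(self.classes_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]
```

As a usage sketch, two synthetic "events" with different dominant frequencies separate cleanly: train on a handful of windows per class, then predict on fresh windows. The paper's advantage over a plain microphone would show up here as cleaner spectra under ambient noise, since SAWs couple through the surface rather than the air.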
Title: InStitches: Augmenting Sewing Patterns with Personalized Material-Efficient Practice
Authors: Mackenzie Leake, Kathryn Jin, Abe Davis, Stefanie Mueller
DOI: https://doi.org/10.1145/3544548.3581499
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: There is a rapidly growing group of people learning to sew online. Without hands-on instruction, these learners are often left to discover the challenges and pitfalls of sewing through trial and error, which can be a frustrating and wasteful process. We present InStitches, a software tool that augments existing sewing patterns with targeted practice tasks to guide users through the skills needed to complete their chosen project. InStitches analyzes the difficulty of sewing instructions relative to a user’s reported expertise in order to determine where practice will be helpful and then solves for a new pattern layout that incorporates additional practice steps while optimizing for efficient use of available materials. Our user evaluation indicates that InStitches can successfully identify challenging sewing tasks and augment existing sewing patterns with practice tasks that users find helpful, showing promise as a tool for helping those new to the craft.
Title: Models of Applied Privacy (MAP): A Persona Based Approach to Threat Modeling
Authors: Jayati Dev, Bahman Rashidi, Vaibhav Garg
DOI: https://doi.org/10.1145/3544548.3581484
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: The paradigm of Privacy by Design aims to integrate privacy early in the product development life cycle. One element of this is conducting threat modeling with developers to identify privacy threats that arise from the architecture of the product. In this paper, we propose Models of Applied Privacy (MAP), a systematic, lightweight privacy threat modeling framework based on attacker personas that is easy to both operationalize and scale. MAP leverages existing privacy threat frameworks to provide an operational roadmap based on relevant threat actors, associated threats, and resulting harm to individuals as well as organizations. We implement MAP as a persona picker tool that threat modelers can use as a menu to identify, investigate, and remediate relevant threats based on the product developer’s scope of privacy risk. We conclude by testing the framework against a repository of 207 privacy breaches extracted from the VERIS Community Database.
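The persona-picker idea in this abstract — mapping attacker personas to threats and harms, then filtering by a product team's declared risk scope — lends itself to a simple data-structure sketch. The field names and filtering rule below are assumptions for illustration; the paper's actual schema is not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class AttackerPersona:
    """Illustrative MAP-style persona record (hypothetical schema)."""
    name: str
    motivation: str
    threats: list = field(default_factory=list)  # threat types this actor poses
    harms: list = field(default_factory=list)    # resulting harms to people/orgs

def relevant_personas(personas, risk_scope):
    """Return personas whose threats intersect the product team's declared
    privacy-risk scope -- the 'menu select' step of a persona picker."""
    return [p for p in personas if set(p.threats) & set(risk_scope)]
```

A threat modeler would populate a library of such personas once, then each product team filters it down to the actors relevant to their architecture before walking through the associated threats and harms.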
Title: Sensing Their Presence: How Emerging Adults And Their Parents Connect After Moving Apart
Authors: Hanieh Shakeri, Denise Y. Geiskkovitch, Radhika Garg, Carman Neustaedter
DOI: https://doi.org/10.1145/3544548.3581102
Venue: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023), April 19, 2023
Abstract: When emerging adults move out of their parents’ homes for the first time, their needs for togetherness and connection evolve, as do their parents’. In co-located homes, people often experience togetherness passively by sensing one another’s presence in their environment. However, when no longer living together, methods of experiencing togetherness change. Thus, we conducted an interview and co-design study with 16 pairs of parents and emerging adults that explores this concept across distance. The study uncovered differences in the connection needs of emerging adults and their parents, including their goals in connecting, the amount of communication they needed, and their needs for privacy and transparency. We additionally found that passive connecting factors included ambient sounds of the home, visual shared experiences and traces of one another in the home, ambient home smellscapes and smell memories, touching left-behind objects or gifted objects, and the taste of family recipes and the ambience of family mealtimes. We discuss suggestions for designing for passive co-presence based on this new knowledge.