Pascal Knierim, T. Kosch, Gabrielle LaBorwit, A. Schmidt
Many events happen so fast that we cannot observe them well with the naked eye. The temporal and spatial limitations of visual perception are well known and determine what we can actually see. In recent years, sensors and camera systems have become available that surpass the limitations of human perception. In this paper, we investigate how augmented reality can be used to create a system that alters the speed at which we perceive the world around us. We contribute an experimental exploration of how visual slow-motion can be implemented to amplify human perception. We outline the research challenges and describe a conceptual architecture for manipulating temporal perception. Using augmented reality glasses, we created a proof-of-concept implementation and conducted a study, for which we report qualitative and quantitative results. We show how providing visual information from the environment at different speeds benefits the user. We also highlight the new approaches required to design interfaces that deal with decoupling perception from the real world.
{"title":"Altering the Speed of Reality?: Exploring Visual Slow-Motion to Amplify Human Perception using Augmented Reality","authors":"Pascal Knierim, T. Kosch, Gabrielle LaBorwit, A. Schmidt","doi":"10.1145/3384657.3384659","DOIUrl":"https://doi.org/10.1145/3384657.3384659","url":null,"abstract":"Many events happen so fast that we cannot observe them well with our naked eye. The temporal and spatial limitations of visual perception are well known and determine what we can actually see. Over the last years, sensors and camera systems became available that have surpassed the limitations of human perception. In this paper, we investigate how we can use augmented reality to create a system that allows altering the speed in which we perceive the world around us. We contribute an experimental exploration of how we can implement visual slow-motion to amplify human perception. We outline the research challenges and describe a conceptual architecture for manipulating the temporal perception. Using augmented reality glasses, we created a proof-of-concept implementation and conducted a study of which we report qualitative and quantitative results. We show how providing visual information from the environment at different speeds has benefits for the user. We also highlight the required new approaches to design interfaces that deal with decoupling the perception of the real would.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126955753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Virtual Reality (VR) offers new opportunities for designing social experiences, but at the same time it challenges the usability of VR, as other avatars can block paths and occlude one's view. In contrast to designing VR to mirror physical reality, we allow avatars to pass through and see through other avatars. Specifically, we vary whether avatars collide with other avatars. To better understand how such properties should be implemented, we also explore multimodal feedback when avatars collide with each other. Results of a user study show that multimodal feedback on collision yields a significantly increased sensation of presence in Social VR. Moreover, while the loss of collision (the possibility to pass through other avatars) causes a significant decrease in felt co-presence, qualitative feedback showed that the ability to walk through avatars can ease access to spots of interest. Finally, we observed that the purpose of Social VR determines how useful the possibility to walk through avatars is. We conclude with design guidelines that distinguish between Social VR with a priority on social interaction, Social VR supporting education and information, and hybrid Social VR enabling education and information in a social environment.
{"title":"Go-Through: Disabling Collision to Access Obstructed Paths and Open Occluded Views in Social VR","authors":"J. Reinhardt, Katrin Wolf","doi":"10.1145/3384657.3384784","DOIUrl":"https://doi.org/10.1145/3384657.3384784","url":null,"abstract":"Social Virtual Reality (VR) offers new opportunities for designing social experiences, but at the same time, it challenges the usability of VR as other avatars can block paths and occlude one's avatar's view. In contrast to designing VR similar to the physical reality, we allow avatars to go through and to see through other avatars. In detail, we vary the property of avatars to collide with other avatars. To better understand how such properties should be implemented, we also explore multimodal feedback when avatars collide with each other. Results of a user study show that multimodal feedback on collision yields to a significantly increased sensation of presence in Social VR. Moreover, while the loss of collision (the possibility to go through other avatars) causes a significant decrease of felt co-presence, qualitative feedback showed that the ability to walk through avatars can ease to access spots of interest. Finally, we observed that the purpose of Social VR determines how useful the possibility to walk through avatars is. We conclude with design guidelines that distinguish between Social VR with a priority on social interaction, Social VR supporting education and information, and hybrid Social VR enabling education and information in a social environment.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124313877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Erik Pescara, Florian Dreschner, Karola Marky, Kai Kunze, Michael Beigl
Research on vibrotactile patterns is traditionally conducted with patterns handcrafted by experts, which are then evaluated in general user studies. This empirical approach to designing vibrotactile patterns relies mostly on expert decisions and is notably not adapted to individual differences in the perception of vibration. This work describes GenVibe, a novel approach to designing vibrotactile patterns through the automatic generation of personal patterns. GenVibe adjusts patterns to an individual's perception using interactive generative models. An algorithm is described and tested with a dummy smartphone made from off-the-shelf electronic components. A user study with 11 participants then evaluates the outcome of GenVibe. Results show a significant increase in accuracy from 73.6% to 84.0% and higher confidence ratings by the users.
{"title":"GenVibe","authors":"Erik Pescara, Florian Dreschner, Karola Marky, Kai Kunze, Michael Beigl","doi":"10.1145/3384657.3384794","DOIUrl":"https://doi.org/10.1145/3384657.3384794","url":null,"abstract":"Research about vibrotactile patterns is traditionally conducted with patterns handcrafted by experts which are then subsequently evaluated in general user studies. The current empirical approach to designing vibrotactile patterns mostly utilizes expert decisions and is notably not adapted to individual differences in the perception of vibration. This work describes GenVibe: a novel approach to designing vibrotactile patterns by examining the automatic generation of personal patterns. GenVibe adjusts patterns to the perception of an individual through the utilization of interactive generative models. An algorithm is described and tested with a dummy smartphone made from off-the-shelf electronic components. Afterward, a user study with 11 participants evaluates the outcome of GenVibe. Results show a significant increase in accuracy from 73.6% to 84.0% and a higher confidence ratings by the users.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121020198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Don Samitha Elvitigala, Denys J. C. Matthies, Chamod Weerasinghe, Yilei Shi, Suranga Nanayakkara
Squats and dead-lifts are considered two important full-body exercises for beginners, which can be performed at home or in the gymnasium. During the execution of these exercises, it is essential to maintain the correct body posture to avoid injuries. In this paper, we demonstrate an unobtrusive sensing approach: an insole-based wearable system that provides feedback on the user's centre of pressure (CoP) via vibrotactile and visual aids. Merely visualizing the CoP can significantly improve body posture and thus effectively assist users when performing squats and dead-lifts. We explored different feedback modalities and conclude that a vibrotactile insole is a practical and effective solution.
{"title":"GymSoles++: Using Smart Wearbales to Improve Body Posture when Performing Squats and Dead-Lifts","authors":"Don Samitha Elvitigala, Denys J. C. Matthies, Chamod Weerasinghe, Yilei Shi, Suranga Nanayakkara","doi":"10.1145/3384657.3385331","DOIUrl":"https://doi.org/10.1145/3384657.3385331","url":null,"abstract":"Squats and dead-lifts are considered two important full-body exercises for beginners, which can be performed at home or the gymnasium. During the execution of these exercises, it is essential to maintain the correct body posture to avoid injuries. In this paper, we demonstrate an unobtrusive sensing approach, an insole-based wearable system that also provides feedback on the user's centre of pressure (CoP) via vibrotactile and visual aids. Solely visualizing the CoP can significantly improve body posture and thus effectively assist users when performing squats and dead-lifts. We explored different feedback modalities and conclude that a vibrotactile insole is a practical and effective solution.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130652080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cheek kissing is a common greeting in many countries around the world. Many parameters are involved in performing the kiss, such as which side to begin on and how many times the kiss is performed. These parameters can be used to infer one's social and physical context. In this paper, we present KissGlass, a system that leverages off-the-shelf smart glasses to recognize different kinds of cheek kissing gestures. Using a dataset collected from 5 participants performing 10 gestures, our system obtains 83.0% accuracy in 10-fold cross-validation and 74.33% accuracy in a leave-one-user-out, user-independent evaluation.
{"title":"KissGlass","authors":"R. Li, Juyoung Lee, Woontack Woo, Thad Starner","doi":"10.1145/3384657.3384801","DOIUrl":"https://doi.org/10.1145/3384657.3384801","url":null,"abstract":"Cheek kissing is a common greeting in many countries around the world. Many parameters are involved when performing the kiss, such as which side to begin the kiss on and how many times the kiss is performed. These parameters can be used to infer one's social and physical context. In this paper, we present KissGlass, a system that leverages off-the-shelf smart glasses to recognize different kinds of cheek kissing gestures. Using a dataset we collected with 5 participants performing 10 gestures, our system obtains 83.0% accuracy in 10-fold cross validation and 74.33% accuracy in a leave-one-user-out user independent evaluation.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125657578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Freediving relies on a diver's ability to hold their breath until resurfacing. Many fatal accidents in freediving are caused by a sudden blackout of the diver right before resurfacing. In this work, we propose a wearable prototype for monitoring oxygen saturation underwater and conceptualize an early warning system that takes the diving depth into account. Our predictive algorithm estimates the latest point of return at which the diver can still surface with a sufficient oxygen level to prevent a blackout, and notifies the diver via an acoustic signal.
{"title":"Towards A Wearable for Deep Water Blackout Prevention","authors":"Frederik Wiehr, Andreas Höh, A. Krüger","doi":"10.1145/3384657.3385329","DOIUrl":"https://doi.org/10.1145/3384657.3385329","url":null,"abstract":"Freediving relies on a diver's ability to hold his breath until resurfacing. Many fatal accidents in freediving are caused by a sudden blackout of the diver right before resurfacing. In this work, we propose a wearable prototype for monitoring oxygen saturation underwater and conceptualize an early warning system with regard to the diving depth. Our predictive algorithm estimates the latest point of return in order to emerge with a sufficient oxygen level to prevent a blackout and notifies the diver via an acoustic signal.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127647192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this case study, we discuss how an implanted magnet can support novel forms of input and output. By measuring the relative position between the magnet and an on-body device, the local position of the device can be used for input. Electromagnetic fields can actuate the magnet to provide output by means of in-vivo haptic feedback. Traditional tracking options would struggle to track the input methods we suggest, and the in-vivo sensations of vibration provided as output differ from the experience of vibrations applied externally; our data suggest that in-vivo vibrations are mediated by different receptors than external vibrations. As the magnet can easily be tracked as well as actuated, it provides opportunities for encoding information as material experiences.
{"title":"Novel Input and Output opportunities using an Implanted Magnet","authors":"P. Strohmeier, Jess McIntosh","doi":"10.1145/3384657.3384785","DOIUrl":"https://doi.org/10.1145/3384657.3384785","url":null,"abstract":"In this case study, we discuss how an implanted magnet can support novel forms of input and output. By measuring the relative position between the magnet and an on-body device, local position of the device can be used for input. Electromagnetic fields can actuate the magnet to provide output by means of in-vivo haptic feedback. Traditional tracking options would struggle tracking the input methods we suggest, and the in-vivo sensations of vibration provided as output differ from the experience of vibrations applied externally - our data suggests that in-vivo vibrations are mediated by different receptors than external vibration. As the magnet can be easily tracked as well as actuated it provides opportunities for encoding information as material experiences.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129472276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pat Pataranutaporn, Angela Vujic, D. S. Kong, P. Maes, Misha Sra
There are trillions of living biological "computers" on, inside, and around the human body: microbes. Microbes have the potential to enhance human-computer interaction (HCI) in entirely new ways. Advances in open-source biotechnology have already enabled designers, artists, and engineers to use microbes in redefining wearables, games, musical instruments, robots, and more. "Living Bits", inspired by Tangible Bits, is an attempt to think beyond the traditional boundaries between biological cells and computers when integrating microorganisms in HCI. In this work we: 1) outline and motivate the possibility of integrating organic and regenerative living systems in HCI; 2) explore and characterize human-microbe interactions across contexts and scales; 3) provide principles for stimulating discussions, presentations, and brainstorms about microbial interfaces. We aim to make Living Bits accessible to researchers across HCI, synthetic biology, biotechnology, and interaction design to explore the next generation of biological HCI.
{"title":"Living Bits: Opportunities and Challenges for Integrating Living Microorganisms in Human-Computer Interaction","authors":"Pat Pataranutaporn, Angela Vujic, D. S. Kong, P. Maes, Misha Sra","doi":"10.1145/3384657.3384783","DOIUrl":"https://doi.org/10.1145/3384657.3384783","url":null,"abstract":"There are trillions of living biological \"computers\" on, inside, and around the human body: microbes. Microbes have the potential to enhance human-computer interaction (HCI) in entirely new ways. Advances in open-source biotechnology have already enabled designers, artists, and engineers to use microbes in redefining wearables, games, musical instruments, robots, and more. \"Living Bits\", inspired by Tangible Bits, is an attempt to think beyond the traditional boundaries that exist between biological cells and computers for integrating microorganism in HCI. In this work we: 1) outline and inspire the possibility for integrating organic and regenerative living systems in HCI; 2) explore and characterize human-microbe interactions across contexts and scales; 3) provide principles for stimulating discussions, presentations, and brainstorms of microbial interfaces. We aim to make Living Bits accessible to researchers across HCI, synthetic biology, biotechnology, and interaction design to explore the next generation of biological HCI.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132878416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-verbal information is essential for understanding intentions and emotions and for facilitating social interaction between humans and between humans and computers. One reliable source of such information is the eyes. We investigated eye-based interaction (recognizing eye gestures and eye movements) using an eyewear device designed for facial expression recognition. The device incorporates 16 low-cost optical sensors and allows hands-free interaction in many situations. Using the device, we evaluated three eye-based interactions. First, we evaluated the accuracy of detecting eye gestures with nine participants. The average accuracy of detecting seven different eye gestures is 89.1% with user-dependent training, using dynamic time warping (DTW) for gesture recognition. Second, we evaluated the accuracy of eye gaze position estimation with five users holding a neutral face. The system showed potential to track the approximate direction of the eyes, with higher accuracy for the vertical (y) position than for the horizontal (x) position. Finally, we conducted a feasibility study with one user reading jokes while wearing the device. The system was capable of analyzing facial expressions and eye movements in daily contexts.
{"title":"Eye-based Interaction Using Embedded Optical Sensors on an Eyewear Device for Facial Expression Recognition","authors":"Katsutoshi Masai, K. Kunze, M. Sugimoto","doi":"10.1145/3384657.3384787","DOIUrl":"https://doi.org/10.1145/3384657.3384787","url":null,"abstract":"Non-verbal information is essential to understand intentions and emotions and to facilitate social interaction between humans and between humans and computers. One reliable source of such information is the eyes. We investigated the eye-based interaction (recognizing eye gestures or eye movements) using an eyewear device for facial expression recognition. The device incorporates 16 low-cost optical sensors. The system allows hands-free interaction in many situations. Using the device, we evaluated three eye-based interactions. First, we evaluated the accuracy of detecting the gestures with nine participants. The average accuracy of detecting seven different eye gestures is 89.1% with user-dependent training. We used dynamic time warping (DTW) for gesture recognition. Second, we evaluated the accuracy of eye gaze position estimation with five users holding a neutral face. The system showed potential to track the approximate direction of the eyes, with higher accuracy in detecting position y than x. Finally, we did a feasibility study of one user reading jokes while wearing the device. The system was capable of analyzing facial expressions and eye movements in daily contexts.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132155717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuya Adachi, Haoran Xie, T. Torii, Haopeng Zhang, Ryo Sagisaka
In this work, we propose a novel wearable device that augments the user's egocentric space over a wide range. To achieve this goal, the proposed device provides bidirectional projection using a head-mounted wearable projector and two dihedral mirrors. The included angle of the mirrors was set to reflect the projected image in front of and behind the user. A prototype system was developed to explore possible applications of the proposed device in different scenarios, such as riding a bike and map navigation.
{"title":"EgoSpace","authors":"Yuya Adachi, Haoran Xie, T. Torii, Haopeng Zhang, Ryo Sagisaka","doi":"10.1145/3384657.3385328","DOIUrl":"https://doi.org/10.1145/3384657.3385328","url":null,"abstract":"In this work, we propose a novel wearable device to augment the user's egocentric space to a wide range. To achieve this goal, the proposed device provides bidirectional projection using a head-mounted wearable projector and two dihedral mirrors. The included angle of the mirrors were set to reflect the projected image in front of and behind the user. A prototype system is developed to explore possible applications using the proposed device in different scenarios, such as riding a bike and map navigation.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117006670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}