
Proceedings of the Augmented Humans International Conference: Latest Publications

Altering the Speed of Reality?: Exploring Visual Slow-Motion to Amplify Human Perception using Augmented Reality
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384659
Pascal Knierim, T. Kosch, Gabrielle LaBorwit, A. Schmidt
Many events happen so fast that we cannot observe them well with the naked eye. The temporal and spatial limitations of visual perception are well known and determine what we can actually see. Over the last few years, sensors and camera systems have become available that surpass the limitations of human perception. In this paper, we investigate how augmented reality can be used to create a system that alters the speed at which we perceive the world around us. We contribute an experimental exploration of how visual slow-motion can be implemented to amplify human perception. We outline the research challenges and describe a conceptual architecture for manipulating temporal perception. Using augmented reality glasses, we created a proof-of-concept implementation and conducted a study for which we report qualitative and quantitative results. We show how providing visual information from the environment at different speeds benefits the user. We also highlight the new approaches required to design interfaces that deal with decoupling the perception of the real world.
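The core idea of visual slow-motion can be sketched, in a heavily simplified form, as a frame buffer that replays captured frames at a fraction of the capture rate, so the display lags reality but shows it slowed down. Everything here (the class name, integer stand-ins for frames, the fixed slowdown factor) is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch of a slow-motion frame buffer: frames are captured at full
# rate but advanced on the display only every `slowdown` refreshes.
from collections import deque

class SlowMotionBuffer:
    def __init__(self, slowdown=2):
        self.frames = deque()
        self.slowdown = slowdown  # 2 -> half speed
        self._tick = 0

    def capture(self, frame):
        # Incoming camera frame (an integer stands in for an image here).
        self.frames.append(frame)

    def display(self):
        # Called once per display refresh; the shown frame advances only
        # every `slowdown` ticks, holding each frame longer than real time.
        self._tick += 1
        if len(self.frames) > 1 and self._tick % self.slowdown == 0:
            self.frames.popleft()
        return self.frames[0]

buf = SlowMotionBuffer(slowdown=2)
shown = []
for f in range(6):  # six capture/display cycles
    buf.capture(f)
    shown.append(buf.display())
print(shown)
```

Note that such a buffer grows without bound while slowed down; a real system would eventually have to resynchronize the display with reality, which is one of the interface-design challenges the abstract alludes to.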
Citations: 9
Go-Through: Disabling Collision to Access Obstructed Paths and Open Occluded Views in Social VR
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384784
J. Reinhardt, Katrin Wolf
Social Virtual Reality (VR) offers new opportunities for designing social experiences, but at the same time it challenges the usability of VR, as other avatars can block paths and occlude one's avatar's view. In contrast to designing VR to mirror physical reality, we allow avatars to go through and see through other avatars. Specifically, we vary whether avatars collide with other avatars. To better understand how such properties should be implemented, we also explore multimodal feedback when avatars collide with each other. Results of a user study show that multimodal feedback on collision yields a significantly increased sensation of presence in Social VR. Moreover, while the loss of collision (the possibility to go through other avatars) causes a significant decrease in felt co-presence, qualitative feedback showed that the ability to walk through avatars can ease access to spots of interest. Finally, we observed that the purpose of Social VR determines how useful the possibility to walk through avatars is. We conclude with design guidelines that distinguish between Social VR with a priority on social interaction, Social VR supporting education and information, and hybrid Social VR enabling education and information in a social environment.
Citations: 3
GenVibe
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384794
Erik Pescara, Florian Dreschner, Karola Marky, Kai Kunze, Michael Beigl
Research on vibrotactile patterns is traditionally conducted with patterns handcrafted by experts, which are then evaluated in general user studies. This empirical approach mostly relies on expert decisions and is notably not adapted to individual differences in the perception of vibration. This work describes GenVibe: a novel approach to designing vibrotactile patterns by examining the automatic generation of personal patterns. GenVibe adjusts patterns to an individual's perception through interactive generative models. An algorithm is described and tested with a dummy smartphone made from off-the-shelf electronic components. A subsequent user study with 11 participants evaluates the outcome of GenVibe. Results show a significant increase in accuracy from 73.6% to 84.0% and higher confidence ratings by the users.
Citations: 0
GymSoles++: Using Smart Wearables to Improve Body Posture when Performing Squats and Dead-Lifts
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3385331
Don Samitha Elvitigala, Denys J. C. Matthies, Chamod Weerasinghe, Yilei Shi, Suranga Nanayakkara
Squats and dead-lifts are considered two important full-body exercises for beginners, which can be performed at home or in the gymnasium. During these exercises, it is essential to maintain the correct body posture to avoid injuries. In this paper, we demonstrate an unobtrusive sensing approach: an insole-based wearable system that provides feedback on the user's centre of pressure (CoP) via vibrotactile and visual aids. Solely visualizing the CoP can significantly improve body posture and thus effectively assist users when performing squats and dead-lifts. We explored different feedback modalities and conclude that a vibrotactile insole is a practical and effective solution.
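The centre of pressure that such an insole reports is conventionally the pressure-weighted mean of the sensor positions. The sketch below illustrates that standard computation; the sensor layout and readings are invented for illustration and are not from the paper.

```python
# Sketch: centre of pressure (CoP) from an insole with discrete pressure cells.
def centre_of_pressure(positions, pressures):
    """Return CoP (x, y) as the pressure-weighted mean of sensor positions."""
    total = sum(pressures)
    if total == 0:
        raise ValueError("no pressure detected")
    x = sum(p * pos[0] for pos, p in zip(positions, pressures)) / total
    y = sum(p * pos[1] for pos, p in zip(positions, pressures)) / total
    return x, y

# Four cells: heel, midfoot, and two forefoot positions (cm, illustrative).
sensors = [(0.0, 0.0), (0.0, 10.0), (-2.0, 20.0), (2.0, 20.0)]
readings = [30.0, 10.0, 5.0, 5.0]  # heel-heavy stance, e.g. the bottom of a squat
print(centre_of_pressure(sensors, readings))  # (0.0, 6.0): CoP pulled toward the heel
```

Feedback could then be driven by how far this point drifts from a target zone for the current exercise.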
Citations: 6
KissGlass
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384801
R. Li, Juyoung Lee, Woontack Woo, Thad Starner
Cheek kissing is a common greeting in many countries around the world. Many parameters are involved when performing the kiss, such as which side to begin the kiss on and how many times the kiss is performed. These parameters can be used to infer one's social and physical context. In this paper, we present KissGlass, a system that leverages off-the-shelf smart glasses to recognize different kinds of cheek kissing gestures. Using a dataset we collected with 5 participants performing 10 gestures, our system obtains 83.0% accuracy in 10-fold cross validation and 74.33% accuracy in a leave-one-user-out, user-independent evaluation.
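A leave-one-user-out evaluation, as reported above, holds out every sample from one user per fold and trains on the rest, which is why its accuracy is usually lower than 10-fold cross validation (the model never sees the test user). The sketch below shows the fold construction with a toy 1-nearest-neighbour classifier; the data, feature vectors, and labels are all invented for illustration.

```python
# Sketch: leave-one-user-out (user-independent) evaluation.
def leave_one_user_out(samples):
    """samples: list of (user_id, features, label). Yields one fold per user."""
    users = sorted({u for u, _, _ in samples})
    for held_out in users:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

def nn_predict(train, x):
    # 1-nearest-neighbour by squared Euclidean distance over feature tuples.
    return min(train, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[1], x)))[2]

data = [  # (user, features, gesture label), toy values
    ("u1", (1.0, 0.0), "left"), ("u1", (0.0, 1.0), "right"),
    ("u2", (0.9, 0.1), "left"), ("u2", (0.1, 0.9), "right"),
    ("u3", (1.1, 0.2), "left"), ("u3", (0.2, 1.1), "right"),
]
fold_accuracy = {}
for user, train, test in leave_one_user_out(data):
    fold_accuracy[user] = sum(nn_predict(train, f) == y for _, f, y in test) / len(test)
print(fold_accuracy)
```

With real sensor data one would typically use a library splitter (e.g. scikit-learn's grouped cross-validation) rather than hand-rolled folds.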
Citations: 9
Towards A Wearable for Deep Water Blackout Prevention
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3385329
Frederik Wiehr, Andreas Höh, A. Krüger
Freediving relies on a diver's ability to hold their breath until resurfacing. Many fatal accidents in freediving are caused by a sudden blackout of the diver right before resurfacing. In this work, we propose a wearable prototype for monitoring oxygen saturation underwater and conceptualize an early warning system that accounts for diving depth. Our predictive algorithm estimates the latest point of return at which the diver can still surface with a sufficient oxygen level to prevent a blackout, and notifies the diver via an acoustic signal.
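One simple way to realize such a "latest point of return" estimate is to extrapolate the current rate of oxygen-saturation decline and compare the time until a critical SpO2 threshold with the time needed to ascend from the current depth. This is only a linear sketch of the idea; all constants (critical threshold, ascent speed) and the extrapolation itself are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: remaining safety margin before the diver must start ascending.
def latest_point_of_return(spo2, spo2_rate, depth_m,
                           ascent_speed=1.0, critical_spo2=80.0):
    """Return remaining seconds before the ascent must begin.

    spo2: current saturation (%); spo2_rate: change per second (negative
    while breath-holding); depth_m: current depth; ascent_speed: m/s.
    """
    if spo2_rate >= 0:
        return float("inf")  # saturation stable: no deadline yet
    time_to_critical = (critical_spo2 - spo2) / spo2_rate  # seconds until threshold
    time_to_surface = depth_m / ascent_speed               # seconds needed to ascend
    return time_to_critical - time_to_surface

# At 20 m with SpO2 at 95% falling 0.5 %/s: 30 s to threshold, 20 s to
# surface, so 10 s of margin before the acoustic warning should fire.
print(latest_point_of_return(95.0, -0.5, 20.0))
```

A real system would need a smoothed, physiologically informed model of desaturation rather than a straight-line extrapolation, but the deadline structure stays the same.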
Citations: 2
Novel Input and Output opportunities using an Implanted Magnet
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384785
P. Strohmeier, Jess McIntosh
In this case study, we discuss how an implanted magnet can support novel forms of input and output. By measuring the relative position between the magnet and an on-body device, the local position of the device can be used for input. Electromagnetic fields can actuate the magnet to provide output by means of in-vivo haptic feedback. Traditional tracking options would struggle to track the input methods we suggest, and the in-vivo sensations of vibration provided as output differ from the experience of vibrations applied externally; our data suggest that in-vivo vibrations are mediated by different receptors than external vibration. As the magnet can be easily tracked as well as actuated, it provides opportunities for encoding information as material experiences.
Citations: 8
Living Bits: Opportunities and Challenges for Integrating Living Microorganisms in Human-Computer Interaction
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384783
Pat Pataranutaporn, Angela Vujic, D. S. Kong, P. Maes, Misha Sra
There are trillions of living biological "computers" on, inside, and around the human body: microbes. Microbes have the potential to enhance human-computer interaction (HCI) in entirely new ways. Advances in open-source biotechnology have already enabled designers, artists, and engineers to use microbes in redefining wearables, games, musical instruments, robots, and more. "Living Bits", inspired by Tangible Bits, is an attempt to think beyond the traditional boundaries that exist between biological cells and computers for integrating microorganisms in HCI. In this work we: 1) outline and inspire the possibility of integrating organic and regenerative living systems in HCI; 2) explore and characterize human-microbe interactions across contexts and scales; 3) provide principles for stimulating discussions, presentations, and brainstorms of microbial interfaces. We aim to make Living Bits accessible to researchers across HCI, synthetic biology, biotechnology, and interaction design to explore the next generation of biological HCI.
Citations: 34
Eye-based Interaction Using Embedded Optical Sensors on an Eyewear Device for Facial Expression Recognition
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384787
Katsutoshi Masai, K. Kunze, M. Sugimoto
Non-verbal information is essential to understand intentions and emotions and to facilitate social interaction between humans and between humans and computers. One reliable source of such information is the eyes. We investigated eye-based interaction (recognizing eye gestures or eye movements) using an eyewear device for facial expression recognition. The device incorporates 16 low-cost optical sensors. The system allows hands-free interaction in many situations. Using the device, we evaluated three eye-based interactions. First, we evaluated the accuracy of detecting gestures with nine participants. The average accuracy of detecting seven different eye gestures is 89.1% with user-dependent training. We used dynamic time warping (DTW) for gesture recognition. Second, we evaluated the accuracy of eye-gaze position estimation with five users holding a neutral face. The system showed potential to track the approximate direction of the eyes, with higher accuracy in detecting the y position than the x position. Finally, we conducted a feasibility study of one user reading jokes while wearing the device. The system was capable of analyzing facial expressions and eye movements in daily contexts.
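Dynamic time warping, which the abstract names as the recognition method, aligns two sequences of possibly different lengths by minimizing the cumulative cost of matched samples. The sketch below is the textbook DTW recurrence with a toy nearest-template classifier; the gesture names and sensor traces are invented for illustration, not the paper's data.

```python
# Sketch: DTW distance plus nearest-template gesture classification.
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Classify a new sensor trace by its nearest gesture template.
templates = {"look_left": [0, 2, 4, 4, 2, 0], "blink": [0, 5, 0]}
sample = [0, 1, 4, 4, 1, 0]
print(min(templates, key=lambda g: dtw_distance(sample, templates[g])))
```

With 16 sensor channels, the per-sample cost would be a vector distance instead of `abs`, but the alignment recurrence is unchanged.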
Citations: 11
EgoSpace
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3385328
Yuya Adachi, Haoran Xie, T. Torii, Haopeng Zhang, Ryo Sagisaka
In this work, we propose a novel wearable device that augments the user's egocentric space over a wide range. To achieve this, the device provides bidirectional projection using a head-mounted wearable projector and two dihedral mirrors. The included angle of the mirrors was set to reflect the projected image both in front of and behind the user. A prototype system was developed to explore possible applications of the device in different scenarios, such as riding a bike and map navigation.
Citations: 6