
Proceedings of the 10th Augmented Human International Conference 2019: Latest Publications

fSense
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311839
Thisum Buddhika, Haimo Zhang, Samantha W. T. Chan, Vipula Dissanayake, Suranga Nanayakkara, Roger Zimmermann
While most existing gestural interfaces focus on the static posture or the dynamic action of the hand, few have investigated the feasibility of using the forces exerted while performing gestures. Using the photoplethysmogram (PPG) sensor of off-the-shelf smartwatches, we show that it is possible to recognize the force of a gesture as an independent input channel. Based on a user study with 12 participants, we found that users were able to reliably produce two levels of force across several types of common gestures. We demonstrate a few interaction scenarios where force is either used as a standalone input or to complement existing input modalities.
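The abstract does not specify the recognition pipeline, but the idea of treating gesture force as an extra input channel can be pictured with a minimal, hypothetical sketch: extract a simple amplitude-variation feature from the PPG window recorded during a gesture and separate two force levels with a fitted threshold. The feature choice, function names, and threshold fitting below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch (not the paper's pipeline): separating two force levels
# from a smartwatch PPG window with a single amplitude-variation feature.
import numpy as np

def ppg_force_feature(ppg_window: np.ndarray) -> float:
    """Mean absolute deviation of the PPG samples captured during a gesture."""
    return float(np.mean(np.abs(ppg_window - np.mean(ppg_window))))

def fit_force_threshold(light_windows, hard_windows) -> float:
    """Fit a midpoint threshold between labelled 'light' and 'hard' examples."""
    light = np.mean([ppg_force_feature(w) for w in light_windows])
    hard = np.mean([ppg_force_feature(w) for w in hard_windows])
    return float((light + hard) / 2.0)

def classify_force(ppg_window: np.ndarray, threshold: float) -> str:
    """Map a new PPG window to one of the two force levels."""
    return "hard" if ppg_force_feature(ppg_window) > threshold else "light"

# Usage with synthetic windows standing in for real PPG recordings.
rng = np.random.default_rng(0)
light = [rng.normal(0, 0.5, 128) for _ in range(10)]
hard = [rng.normal(0, 1.5, 128) for _ in range(10)]
thr = fit_force_threshold(light, hard)
print(classify_force(rng.normal(0, 1.5, 128), thr))
```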
{"title":"fSense","authors":"Thisum Buddhika, Haimo Zhang, Samantha W. T. Chan, Vipula Dissanayake, Suranga Nanayakkara, Roger Zimmermann","doi":"10.1145/3311823.3311839","DOIUrl":"https://doi.org/10.1145/3311823.3311839","url":null,"abstract":"While most existing gestural interfaces focus on the static posture or the dynamic action of the hand, few have investigated the feasibility of using the forces that are exerted while performing gestures. Using the photoplethysmogram (PPG) sensor of off-the-shelf smartwatches, we show that, it is possible to recognize the force of a gesture as an independent channel of input. Based on a user study with 12 participants, we found that users were able to reliably produce two levels of force across several types of common gestures. We demonstrate a few interaction scenarios where the force is either used as a standalone input or to complement existing input modalities.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124401823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
CompoundDome
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311827
Eriko Maruyama, Junichi Rekimoto
The head-mounted display (HMD) is widely used as a way to experience virtual space. However, HMDs have drawbacks when worn, such as the skin touching equipment shared with others, and functional issues such as a tendency to induce VR sickness. In this research, we propose a wearable dome device named "CompoundDome", which enables interaction with the real world by projecting images onto the dome. In our system, we use a 600 mm diameter dome, and a projector projects images onto the dome to cover the wearer's field of view. With this configuration, the equipment does not touch the skin, and motion sickness can be reduced. An HMD also hinders face-to-face communication because it hides the user's face. In addition, the wearer cannot see the outside world when wearing an HMD. Hence, we applied screen paint to the transparent dome in a mesh pattern. With this configuration, users can see the image when it is projected, and they can see outside the dome when it is not. Furthermore, users and the surrounding people can communicate face to face by photographing the wearer's face with a camera installed in the dome and projecting it into the virtual space. In this paper, we describe the composition of CompoundDome in comparison with other means of presenting virtual space, as well as various applications enabled by CompoundDome.
{"title":"CompoundDome","authors":"Eriko Maruyama, Junichi Rekimoto","doi":"10.1145/3311823.3311827","DOIUrl":"https://doi.org/10.1145/3311823.3311827","url":null,"abstract":"The head-mounted display (HMD) is widely used as a method to experience virtual space. However, HMD has problems in mounting, such as skin touching the equipment used by others, functional issues such as easy to induce VR sickness. In this research, we propose a wearable dome device named \"CompoundDome\", which enables interaction with the real world by projecting images on the dome. In our system, we used a 600 mm diameter dome, and a projector projects images to the dome to cover the wearer's field of view. With this configuration, the equipment does not touch the skin, and motion sickness can be reduced. HMD also lacks in providing face-to-face communication, because it hides user's face. In addition, the wearer can not see the outside when wearing the HMD. Hence, we applied screen paint to the transparent dome in a mesh form. With this configuration, users can see the image when the image is projected, and they can see the outside of the dome when the image is not projected. Furthermore, users and the surrounding people can make face to face communication by photographing the face with the camera installed in the dome and projecting the face in the virtual space. In this paper, we describe the composition of CompoundDome, in comparison with other virtual space presentation means, and various applications enabled by CompoundDome.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121728527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Sentiment Pen: Recognizing Emotional Context Based on Handwriting Features
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311868
Jiawen Han, G. Chernyshov, D. Zheng, Peizhong Gao, Takuji Narumi, Katrin Wolf, K. Kunze
In this paper, we discuss the assessment of the emotional state of the user from digitized handwriting for implicit human-computer interaction. The proposed concept exemplifies how a digital system could recognize the emotional context of the interaction. We discuss our approach to emotion recognition and the underlying neurophysiological mechanisms. To verify the viability of our approach, we conducted a series of tests in which participants were asked to perform simple writing tasks after being exposed to a series of emotionally stimulating video clips from EMDB[6], one set of four clips per quadrant of the circumplex model of emotion[28]. The user-independent Support Vector Classifier (SVC) built using the recorded data shows up to 66% accuracy for certain types of writing tasks in the four-class setting (1. High Valence, High Arousal; 2. High Valence, Low Arousal; 3. Low Valence, High Arousal; 4. Low Valence, Low Arousal). Under the same conditions, a user-dependent classifier reaches an average of 70% accuracy across all 12 study participants. While future work is required to improve the classification rate, this work should be seen as a proof of concept for assessing users' emotions while handwriting, aiming to motivate research on implicit interaction during writing and to enable emotion sensitivity in mobile and ubiquitous computing.
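For readers who want a concrete picture of the classification setup, the sketch below shows a generic user-independent four-class SVC over handwriting features, in the spirit of the abstract. The feature names and the placeholder data are assumptions for illustration; the paper's actual feature set and evaluation protocol are not reproduced here.

```python
# Generic 4-class SVC sketch over handwriting features (scikit-learn).
# Feature names and data are placeholders, not the paper's actual feature set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

QUADRANTS = ["HV-HA", "HV-LA", "LV-HA", "LV-LA"]  # circumplex quadrants

# X: one row per writing sample, e.g. [mean pressure, mean speed,
# stroke duration, pen-lift count]; y: quadrant index 0..3.
rng = np.random.default_rng(0)
X = rng.random((80, 4))
y = rng.integers(0, 4, 80)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```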
Citations: 6
Prospero: A Personal Wearable Memory Coach
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311870
Samantha W. T. Chan, Haimo Zhang, Suranga Nanayakkara
Prospective memory, which involves remembering to perform intended actions, is essential for independent daily living, especially as we grow older. Yet, the majority of everyday memory failures are due to prospective memory lapses. Memory strategy training can help to tackle such lapses. We present Prospero, a wearable virtual memory coach that guides users to learn and apply a memory technique through conversation in natural language. Using physiological signals, Prospero proactively initiates practice of the technique at opportune times when user attention and cognitive load have more bandwidth. This could be a step toward creating more natural and effective digital memory training that could eventually reduce memory decline. In this paper, we contribute details of its implementation and conversation design.
Citations: 5
An Implicit Dialogue Injection System for Interruption Management
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311875
T. Shibata, A. Borisenko, Anzu Hakone, Tal August, L. Deligiannidis, Chen-Hsiang Yu, Matthew Russell, A. Olwal, R. Jacob
This paper presents our efforts in redesigning the conventional on/off interruption management tactic (a.k.a. "Do Not Disturb Mode") for situations where interruptions are inevitable. We introduce an implicit dialogue injection system, in which the computer implicitly observes the user's state of busyness from passive measurement of the prefrontal cortex to determine how to interrupt the user. We use functional Near-Infrared Spectroscopy (fNIRS), a noninvasive brain-sensing technique. In this paper, we describe our system architecture and report results of our proof-of-concept study, in which we compared two contrasting interruption strategies; the computer either forcibly interrupts the user with a secondary task or requests the user's participation before presenting it. The latter yielded improved user experience (e.g. lower reported annoyance), in addition to showing a potential improvement in task performance (i.e. retaining context information) when the user was busier. We conclude that tailoring the presentation of interruptions based on real-time user state provides a step toward making computers more considerate of their users.
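As a rough illustration of the decision layer only, a system like this could map an estimate of the user's busyness onto one of the two interruption strategies compared in the study. The moving-baseline heuristic, class names, and margin below are assumptions; the actual fNIRS signal processing is not shown.

```python
# Hedged sketch of the decision layer: estimate busyness from a single
# placeholder activity value and choose an interruption strategy.
# The baseline heuristic and margin are assumptions, not the authors' model.
from collections import deque


class InterruptionManager:
    def __init__(self, window: int = 30, busy_margin: float = 0.1):
        self.recent = deque(maxlen=window)   # recent activity samples
        self.busy_margin = busy_margin

    def update(self, activity: float) -> str:
        """Return 'request_participation' when the user looks busy, else 'interrupt'."""
        self.recent.append(activity)
        baseline = sum(self.recent) / len(self.recent)
        busy = activity > baseline + self.busy_margin
        return "request_participation" if busy else "interrupt"


mgr = InterruptionManager()
for sample in (0.20, 0.22, 0.21, 0.45):
    print(mgr.update(sample))
```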
Citations: 2
Augmented Recreational Volleyball Court: Supporting the Beginners' Landing Position Prediction Skill by Providing Peripheral Visual Feedback
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311843
Koya Sato, Yuji Sano, M. Otsuki, Mizuki Oka, Kazuhiko Kato
Volleyball is widely popular as a way to share a sense of unity and achievement with others. However, errors can keep beginners from enjoying the game. To overcome this issue, we developed a system that supports beginners' skill in predicting the ball's landing position by indicating the predicted landing position on the floor as visual feedback. In volleyball, it is necessary to pay attention to the ball in the air, so visual feedback on the floor surface must be perceived through peripheral vision. The effect of such visual feedback in supporting beginners' prediction skill was not clear. Therefore, we evaluated the effectiveness of the proposed system via a simulated serve-reception experiment. As a result, we confirmed that the proposed system improved prediction skill in terms of prediction speed and accuracy in the left-right direction, and that beginners felt an improvement in prediction accuracy and ease of ball manipulation, thereby increasing their enjoyment. These results also indicate that peripheral vision support can be utilized in other disciplines in which there is a distance between the object of attention and the playing surface on which visual feedback can be presented.
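The abstract does not detail how the landing position is computed, but the core prediction can be pictured with a drag-free projectile model: given a tracked ball position and velocity, solve for the time at which the ball reaches the floor plane. The tracker providing those inputs, and the neglect of drag and spin, are assumptions of this sketch.

```python
# Minimal drag-free projectile sketch: predict where a tracked ball meets the
# floor plane (z = 0). Ball tracking is assumed to come from an external system.
import math

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(pos, vel):
    """pos = (x, y, z) in metres, vel = (vx, vy, vz) in m/s; returns (x, y) at z = 0."""
    x, y, z = pos
    vx, vy, vz = vel
    # Solve z + vz*t - 0.5*G*t^2 = 0 and keep the positive root.
    t = (vz + math.sqrt(vz * vz + 2.0 * G * z)) / G
    return (x + vx * t, y + vy * t)

# A ball 3 m above the floor, moving forward and upward.
print(predict_landing((0.0, 0.0, 3.0), (2.0, 1.0, 4.0)))
```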
Citations: 12
AR Pottery Wheel-Throwing by Attaching Omnidirectional Cameras to the Center of a User's Palms
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311856
Y. Maruyama, Y. Kono
This research describes our system for AR pottery wheel-throwing, which employs an HMD and omnidirectional cameras, each attached to the center of one of the user's palms. The omnidirectional cameras enable the user's finger postures, and the three-dimensional relative position and orientation between the user's hands and the virtual clay model on the wheel, to be estimated. Our system detects a marker on the desk, sets the wheel in the marker's coordinate system, and estimates finger postures in real time. The system then simulates the collision between the virtual clay model and the left/right hand models based on the above information. Pottery wheel-throwing is reproduced in the Unity software environment by deforming the clay model through contact with the hand models in this simulation.
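The collision-driven deformation can be pictured, in a very reduced form, as a revolved clay profile (radius per height) that is pushed inward wherever an estimated fingertip position lies inside the surface. The profile representation, dimensions, and contact rule below are assumptions used only to illustrate the idea; the actual simulation runs on mesh models in Unity.

```python
# Reduced sketch of wheel-thrown clay deformation: the clay is a revolved
# profile (height -> radius); a fingertip inside the surface pushes that
# height's radius inward. Representation and dimensions are assumptions.
import math

# Cylindrical starting lump: radius 60 mm at every 10 mm height step up to 90 mm.
clay_profile = {h: 60.0 for h in range(0, 100, 10)}

def deform(fingertip, profile):
    """fingertip = (x, y, z) in wheel coordinates (mm), z along the rotation axis."""
    x, y, z = fingertip
    r_finger = math.hypot(x, y)
    nearest_h = min(profile, key=lambda h: abs(h - z))   # closest profile ring
    if 0.0 <= z <= max(profile) and r_finger < profile[nearest_h]:
        profile[nearest_h] = r_finger                    # push the wall in to the fingertip
    return profile

print(deform((30.0, 20.0, 42.0), clay_profile)[40])      # ring at 40 mm is now ~36.06 mm
```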
Citations: 1
TherModule
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311826
Tomosuke Maeda, T. Kurahashi
Humans can feel tactile sensations over the whole body through their sensory organs. However, many haptic devices are limited to specific body parts and might not provide natural haptic feedback. Thus, we propose a novel interface, TherModule, which is a wearable and modular thermal feedback system for embodied interactions based on a wireless platform. TherModule can be worn on multiple body parts such as the wrist, forearm, ankle, and neck. In this paper, we describe the system concept, module implementation, and applications. To demonstrate and explore embodied interaction with thermal feedback, we implemented prototype applications, such as movie experiences, projector-based augmented reality, navigation, and notification, with TherModule worn on multiple parts of the body. The results of an experiment on the movie experience showed that participants felt more interaction between temperature and visual stimuli.
{"title":"TherModule","authors":"Tomosuke Maeda, T. Kurahashi","doi":"10.1145/3311823.3311826","DOIUrl":"https://doi.org/10.1145/3311823.3311826","url":null,"abstract":"Humans have specific sensory organs and they can feel tactile sensation on the whole body. However, many haptic devices have limitations due to the location of the body part and might not provide natural haptic feedback. Thus, we propose a novel interface, TherModule, which is a wearable and modular thermal feedback system for embodied interactions based on a wireless platform. TherModule can be worn on multiple body parts such as the wrist, forearm, ankle, and neck. In this paper, we describe the system concept, module implementation, and applications. To demonstrate and explore the embodied interaction with thermal feedback, we implemented prototype applications, such as movie experiences, projector-based augmented reality, navigation, and notification based on a wireless platform, with TherModule on multiple parts of the body. The result of an experiment on movie experience showed that participants felt more interactions between temperature and visual stimulus.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128527008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Automatic Eyeglasses Replacement for a 3D Virtual Try-on System
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311854
Takumi Kobayashi, Yuta Sugiura, H. Saito, Yuji Uema
This paper presents a 3D virtual eyeglasses try-on system for practical use. When being fitted for eyeglasses in a shop, consumers wish to look at themselves in a mirror while trying on various eyeglass styles. However, people who need eyeglasses to correct their eyesight cannot clearly observe their face in the mirror without wearing them, which makes fitting new eyeglasses difficult. This research proposes a virtual try-on system that can be used while wearing eyeglasses. We virtually replace the user's eyeglasses in the input video with new eyeglasses. Moreover, a fast and accurate face tracking tool enables our system to automatically display 3D virtual glasses following the user's head motion. Experimental results demonstrate that the proposed method can render virtual glasses naturally while the user is wearing real eyeglasses.
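One small part of such a system, placing the virtual glasses over the face in image space, can be sketched from two tracked eye-centre landmarks: their midpoint gives the overlay position, their distance the scale, and their angle the in-plane roll. The face tracker supplying the landmarks and the reference glasses width are assumptions of this sketch, not the paper's implementation.

```python
# Hypothetical sketch: derive the 2D pose (position, scale, roll) of a glasses
# overlay from two tracked eye-centre landmarks. The face tracker itself and
# the reference glasses width (in pixels at scale 1.0) are assumptions.
import math

def glasses_pose(left_eye, right_eye, reference_width_px=140.0):
    """left_eye/right_eye: (x, y) pixel coordinates. Returns ((cx, cy), scale, angle_deg)."""
    cx = (left_eye[0] + right_eye[0]) / 2.0
    cy = (left_eye[1] + right_eye[1]) / 2.0
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    inter_eye = math.hypot(dx, dy)
    scale = inter_eye / reference_width_px        # relative to the reference glasses width
    angle = math.degrees(math.atan2(dy, dx))      # head roll in the image plane
    return (cx, cy), scale, angle

print(glasses_pose((220, 310), (360, 318)))
```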
Citations: 2
Double Shellf: What Psychological Effects can be Caused through Interaction with a Doppelganger?
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311862
Yuji Hatada, S. Yoshida, Takuji Narumi, M. Hirose
Advances in 3D capture technology have made it easier to generate a realistic avatar, which can represent a person in virtual environments. Because avatars can be easily duplicated in virtual environments, an unrealistic situation can arise in which a person sees her/his own doppelgangers. A doppelganger is a double of a person and is sometimes portrayed as a sinister presence. To investigate how people feel and react when they face their doppelgangers, we developed "Double Shellf", a virtual reality experience in which people can interact with their virtual doppelgangers in various situations. In this paper, we introduce the design of Double Shellf and discuss the reactions of 86 users. The user study revealed that most people felt intense eeriness when they saw their doppelgangers acting autonomously and when they were touched by them. We also found a gender difference in reactions to the doppelgangers. We explore effective ways of utilizing doppelgangers.
Citations: 5