fSense (DOI: 10.1145/3311823.3311839)
Thisum Buddhika, Haimo Zhang, Samantha W. T. Chan, Vipula Dissanayake, Suranga Nanayakkara, Roger Zimmermann
While most existing gestural interfaces focus on the static posture or the dynamic action of the hand, few have investigated the feasibility of using the forces exerted while performing gestures. Using the photoplethysmogram (PPG) sensor of off-the-shelf smartwatches, we show that it is possible to recognize the force of a gesture as an independent channel of input. Based on a user study with 12 participants, we found that users were able to reliably produce two levels of force across several types of common gestures. We demonstrate several interaction scenarios where force is used either as a standalone input or to complement existing input modalities.
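As a rough illustration of this sensing idea, the sketch below extracts simple amplitude features from a raw PPG window and trains a two-class force classifier. The feature set, window length, and classifier choice are our assumptions, not the authors' pipeline.

```python
# Sketch: two-level force recognition from a smartwatch PPG window (assumed setup).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def ppg_features(window: np.ndarray) -> np.ndarray:
    """Simple amplitude features of a raw PPG window (e.g., 2 s at 100 Hz)."""
    ac = window - window.mean()            # remove the DC component
    return np.array([
        ac.std(),                          # pulse-amplitude proxy
        np.abs(np.diff(ac)).mean(),        # mean absolute slope
        ac.max() - ac.min(),               # peak-to-peak range
    ])

def train_force_classifier(windows, labels):
    """windows: list of PPG arrays; labels: 0 = light force, 1 = hard force."""
    X = np.stack([ppg_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```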
{"title":"fSense","authors":"Thisum Buddhika, Haimo Zhang, Samantha W. T. Chan, Vipula Dissanayake, Suranga Nanayakkara, Roger Zimmermann","doi":"10.1145/3311823.3311839","DOIUrl":"https://doi.org/10.1145/3311823.3311839","url":null,"abstract":"While most existing gestural interfaces focus on the static posture or the dynamic action of the hand, few have investigated the feasibility of using the forces that are exerted while performing gestures. Using the photoplethysmogram (PPG) sensor of off-the-shelf smartwatches, we show that, it is possible to recognize the force of a gesture as an independent channel of input. Based on a user study with 12 participants, we found that users were able to reliably produce two levels of force across several types of common gestures. We demonstrate a few interaction scenarios where the force is either used as a standalone input or to complement existing input modalities.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124401823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CompoundDome (DOI: 10.1145/3311823.3311827)
Eriko Maruyama, Junichi Rekimoto
The head-mounted display (HMD) is widely used as a way to experience virtual spaces. However, HMDs have hygiene problems, such as the wearer's skin touching equipment shared with others, and functional problems, such as a tendency to induce VR sickness. In this research, we propose a wearable dome device named "CompoundDome", which enables interaction with the real world by projecting images onto the dome. Our system uses a 600 mm diameter dome, onto which a projector projects images covering the wearer's field of view. With this configuration, the equipment does not touch the skin, and motion sickness can be reduced. HMDs also hinder face-to-face communication because they hide the user's face, and the wearer cannot see the outside world. Hence, we applied screen paint to the transparent dome in a mesh pattern: users see the image when one is projected, and can see outside the dome when it is not. Furthermore, the wearer and the people around them can communicate face to face by capturing the wearer's face with a camera installed in the dome and projecting it into the virtual space. In this paper, we describe the composition of CompoundDome, compare it with other means of presenting virtual spaces, and present various applications it enables.
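For intuition about rendering onto a hemispherical screen, here is a hypothetical warping step that maps a viewing direction to fisheye image coordinates via an azimuthal equidistant projection, a common approach for dome projection. The abstract does not specify the authors' actual mapping.

```python
# Hypothetical direction-to-fisheye mapping for dome projection (not the
# authors' pipeline): azimuthal equidistant model over a 180-degree field.
import numpy as np

def direction_to_fisheye_uv(d: np.ndarray, fov_deg: float = 180.0) -> np.ndarray:
    """d: unit view direction (x right, y up, z forward) -> (u, v) in [-1, 1]."""
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))   # angle from the forward axis
    r = theta / np.radians(fov_deg / 2.0)         # equidistant radial mapping
    phi = np.arctan2(d[1], d[0])                  # azimuth around the axis
    return np.array([r * np.cos(phi), r * np.sin(phi)])
```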
{"title":"CompoundDome","authors":"Eriko Maruyama, Junichi Rekimoto","doi":"10.1145/3311823.3311827","DOIUrl":"https://doi.org/10.1145/3311823.3311827","url":null,"abstract":"The head-mounted display (HMD) is widely used as a method to experience virtual space. However, HMD has problems in mounting, such as skin touching the equipment used by others, functional issues such as easy to induce VR sickness. In this research, we propose a wearable dome device named \"CompoundDome\", which enables interaction with the real world by projecting images on the dome. In our system, we used a 600 mm diameter dome, and a projector projects images to the dome to cover the wearer's field of view. With this configuration, the equipment does not touch the skin, and motion sickness can be reduced. HMD also lacks in providing face-to-face communication, because it hides user's face. In addition, the wearer can not see the outside when wearing the HMD. Hence, we applied screen paint to the transparent dome in a mesh form. With this configuration, users can see the image when the image is projected, and they can see the outside of the dome when the image is not projected. Furthermore, users and the surrounding people can make face to face communication by photographing the face with the camera installed in the dome and projecting the face in the virtual space. In this paper, we describe the composition of CompoundDome, in comparison with other virtual space presentation means, and various applications enabled by CompoundDome.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121728527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sentiment Pen: Recognizing Emotional Context Based on Handwriting Features (DOI: 10.1145/3311823.3311868)
Jiawen Han, G. Chernyshov, D. Zheng, Peizhong Gao, Takuji Narumi, Katrin Wolf, K. Kunze
In this paper, we discuss assessing the user's emotional state from digitized handwriting for implicit human-computer interaction. The proposed concept exemplifies how a digital system could recognize the emotional context of an interaction. We discuss our approach to emotion recognition and the underlying neurophysiological mechanisms. To verify the viability of our approach, we conducted a series of tests in which participants performed simple writing tasks after being exposed to emotionally stimulating video clips from EMDB [6], one set of four clips per quadrant of the circumplex model of emotion [28]. A user-independent Support Vector Classifier (SVC) built on the recorded data reaches up to 66% accuracy for certain types of writing tasks on four-class classification (1. high valence, high arousal; 2. high valence, low arousal; 3. low valence, high arousal; 4. low valence, low arousal). Under the same conditions, a user-dependent classifier reaches an average of 70% accuracy across all 12 study participants. While future work is required to improve the classification rate, this work is a proof of concept for assessing users' emotions while they write, intended to motivate research on implicit interaction during writing and to enable emotion sensitivity in mobile and ubiquitous computing.
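A minimal sketch of the user-independent evaluation described above, assuming handwriting feature vectors have already been extracted per writing task; grouping cross-validation folds by participant keeps test writers unseen during training:

```python
# Sketch: user-independent 4-class (valence/arousal quadrant) SVC evaluation.
# X: (n_samples, n_features) handwriting features per task (assumed precomputed);
# y: quadrant labels in {0, 1, 2, 3}; groups: participant IDs.
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_user_independent(X, y, groups, n_splits=4):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    # Each fold tests on participants never seen in training (user-independent).
    scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits))
    return scores.mean()
```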
{"title":"Sentiment Pen: Recognizing Emotional Context Based on Handwriting Features","authors":"Jiawen Han, G. Chernyshov, D. Zheng, Peizhong Gao, Takuji Narumi, Katrin Wolf, K. Kunze","doi":"10.1145/3311823.3311868","DOIUrl":"https://doi.org/10.1145/3311823.3311868","url":null,"abstract":"In this paper, we discuss the assessment of the emotional state of the user from digitized handwriting for implicit human-computer interaction. The proposed concept exemplifies how a digital system could recognize the emotional context of the interaction. We discuss our approach to emotion recognition and the underlying neurophysiological mechanisms. To verify the viability of our approach, we have conducted a series of tests where participants were asked to perform simple writing tasks after being exposed to a series of emotionally-stimulating video clips from EMDB[6], one set of four clips per each quadrant on the circumplex model of emotion[28]. The user-independent Support Vector Classifier (SVC) built using the recorded data shows up to 66% accuracy for certain types of writing tasks for 1 in 4 classification (1. High Valence, High Arousal; 2. High Valence, Low Arousal; 3. Low Valence, High Arousal; 4. Low Valence, Low Arousal). In the same conditions, a user-dependent classifier reaches an average of 70% accuracy across all 12 study participants. While future work is required to improve the classification rate, this work should be seen as proof-of-concept for emotion assessment of users while handwriting aiming to motivate research on implicit interaction while writing to enable emotion-sensitivity in mobile and ubiquitous computing.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116360532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prospero: A Personal Wearable Memory Coach (DOI: 10.1145/3311823.3311870)
Samantha W. T. Chan, Haimo Zhang, Suranga Nanayakkara
Prospective memory, which involves remembering to perform intended actions, is essential for independent daily living, especially as we grow older. Yet the majority of everyday memory failures are due to prospective memory lapses. Memory strategy training can help address such lapses. We present Prospero, a wearable virtual memory coach that guides users to learn and apply a memory technique through conversation in natural language. Using physiological signals, Prospero proactively initiates practice of the technique at opportune moments, when the user's attention and cognitive load leave spare capacity. This could be a step toward more natural and effective digital memory training that could eventually reduce memory decline. In this paper, we contribute details of its implementation and conversation design.
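As one hedged example of how such "opportune moments" might be detected from physiological signals, the sketch below uses heart-rate variability (RMSSD) as a proxy for spare cognitive capacity. The abstract does not name the signals or thresholds Prospero actually uses.

```python
# Illustrative opportune-moment detector (our assumption, not Prospero's method).
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """RMSSD heart-rate variability; higher values suggest lower load/stress."""
    return float(np.sqrt(np.mean(np.diff(rr_intervals_ms) ** 2)))

def is_opportune(rr_window: np.ndarray, rmssd_floor: float = 40.0) -> bool:
    """rr_window: recent R-R intervals (ms); the cutoff is an assumed value."""
    return rmssd(rr_window) >= rmssd_floor
```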
{"title":"Prospero: A Personal Wearable Memory Coach","authors":"Samantha W. T. Chan, Haimo Zhang, Suranga Nanayakkara","doi":"10.1145/3311823.3311870","DOIUrl":"https://doi.org/10.1145/3311823.3311870","url":null,"abstract":"Prospective memory, which involves remembering to perform intended actions, is essential for independent daily living especially as we grow older. Yet, majority of everyday memory failures are due to prospective memory lapses. Memory strategy training can help to tackle such lapses. We present Prospero, a wearable virtual memory coach that guides users to learn and apply a memory technique through conversation in natural language. Using physiological signals, Prospero proactively initiates practice of the technique during opportune times where user attention and cognitive load have more bandwidth. This could be a step towards creating more natural and effective digital memory training that could eventually reduce memory decline. In this paper, we contribute with details of its implementation and conversation design.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125769239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Implicit Dialogue Injection System for Interruption Management (DOI: 10.1145/3311823.3311875)
T. Shibata, A. Borisenko, Anzu Hakone, Tal August, L. Deligiannidis, Chen-Hsiang Yu, Matthew Russell, A. Olwal, R. Jacob
This paper presents our efforts in redesigning the conventional on/off interruption management tactic (a.k.a. "Do Not Disturb" mode) for situations where interruptions are inevitable. We introduce an implicit dialogue injection system, in which the computer implicitly observes the user's state of busyness through passive measurement of the prefrontal cortex to determine how to interrupt the user. We use functional near-infrared spectroscopy (fNIRS), a noninvasive brain-sensing technique. We describe our system architecture and report the results of a proof-of-concept study comparing two contrasting interruption strategies: the computer either forcibly interrupts the user with a secondary task or requests the user's participation before presenting it. The latter yielded an improved user experience (e.g., lower reported annoyance) and showed a potential improvement in task performance (i.e., retaining context information) when the user was busier. We conclude that tailoring the presentation of interruptions to real-time user state is a step toward making computers more considerate of their users.
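The core decision can be pictured as a small policy over an estimated busyness level. This sketch is illustrative only; the threshold and the mapping from fNIRS features to a workload score are assumptions.

```python
# Sketch: choosing how to deliver an inevitable interruption from an
# fNIRS-derived busyness estimate (assumed normalized to 0..1).
from dataclasses import dataclass
from enum import Enum

class Strategy(Enum):
    FORCE = "present the secondary task immediately"
    REQUEST = "ask for the user's participation first"

@dataclass
class InterruptionManager:
    busy_threshold: float = 0.6   # assumed workload cutoff, not from the paper

    def choose(self, workload: float) -> Strategy:
        """workload: 0..1 estimate from prefrontal-cortex fNIRS features."""
        return Strategy.REQUEST if workload >= self.busy_threshold else Strategy.FORCE
```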
{"title":"An Implicit Dialogue Injection System for Interruption Management","authors":"T. Shibata, A. Borisenko, Anzu Hakone, Tal August, L. Deligiannidis, Chen-Hsiang Yu, Matthew Russell, A. Olwal, R. Jacob","doi":"10.1145/3311823.3311875","DOIUrl":"https://doi.org/10.1145/3311823.3311875","url":null,"abstract":"This paper presents our efforts in redesigning the conventional on/off interruption management tactic (a.k.a. \"Do Not Disturb Mode\") for situations where interruptions are inevitable. We introduce an implicit dialogue injection system, in which the computer implicitly observes the user's state of busyness from passive measurement of the prefrontal cortex to determine how to interrupt the user. We use functional Near-Infrared Spectroscopy (fNIRS), a noninvasive brain-sensing technique. In this paper, we describe our system architecture and report results of our proof-of-concept study, in which we compared two contrasting interruption strategies; the computer either forcibly interrupts the user with a secondary task or requests the user's participation before presenting it. The latter yielded improved user experience (e.g. lower reported annoyance), in addition to showing a potential improvement in task performance (i.e. retaining context information) when the user was busier. We conclude that tailoring the presentation of interruptions based on real-time user state provides a step toward making computers more considerate of their users.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129724929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented Recreational Volleyball Court: Supporting the Beginners' Landing Position Prediction Skill by Providing Peripheral Visual Feedback (DOI: 10.1145/3311823.3311843)
Koya Sato, Yuji Sano, M. Otsuki, Mizuki Oka, Kazuhiko Kato
Volleyball is widely popular as a way to share a sense of unity and achievement with others. However, errors keep beginners from enjoying the game. To address this issue, we developed a system that supports beginners' ability to predict where the ball will land by indicating the predicted landing position on the floor as visual feedback. In volleyball, players must keep their attention on the airborne ball, so visual feedback on the floor must be perceived through peripheral vision, and the effect of such feedback on beginners' prediction skill was unclear. We therefore evaluated the proposed system in a simulated serve-reception experiment. We confirmed that the system improved prediction speed and left-right prediction accuracy, and that beginners felt an improvement in prediction accuracy and ease of ball handling, which increased their enjoyment. These results also suggest that peripheral-vision support can be applied in other sports where there is a distance between the object of attention and the playing surface on which visual feedback can be presented.
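For intuition, a landing-position predictor can be as simple as a ballistic extrapolation from the tracked ball state. The sketch below ignores air drag and spin, which a deployed system would likely need to model.

```python
# Sketch: ballistic landing-point prediction from a tracked ball state
# (drag and spin ignored; the abstract does not specify the actual estimator).
import math

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(pos, vel, floor_z=0.0):
    """pos=(x,y,z) m, vel=(vx,vy,vz) m/s; returns the (x, y) floor contact point."""
    x, y, z = pos
    vx, vy, vz = vel
    # Solve z + vz*t - 0.5*G*t^2 = floor_z for the positive time of flight.
    a, b, c = -0.5 * G, vz, z - floor_z
    t = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return (x + vx * t, y + vy * t)
```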
{"title":"Augmented Recreational Volleyball Court: Supporting the Beginners' Landing Position Prediction Skill by Providing Peripheral Visual Feedback","authors":"Koya Sato, Yuji Sano, M. Otsuki, Mizuki Oka, Kazuhiko Kato","doi":"10.1145/3311823.3311843","DOIUrl":"https://doi.org/10.1145/3311823.3311843","url":null,"abstract":"Volleyball is widely popular as a way to share a sense of unity and achievement with others. However, errors detract beginners from enjoying the game. To overcome this issue, we developed a system that supports the beginners' skill to predict the ball landing position by indicating the predicted ball landing position on the floor as a visual feedback. In volleyball, it is necessary to pay attention to the ball that has been launched in air, and visual feedback on the floor surface must be perceived through peripheral vision. The effect of such visual feedback in supporting beginners' prediction skill was not clear. Therefore, we evaluated the effectiveness of the proposed system via a simulated serve-reception experiment. As a result, we confirmed that the proposed system improved the prediction skill in terms of the prediction speed and accuracy in the left-right direction, and that beginners felt an improvement in the prediction accuracy and ease of ball manipulation, thereby increasing the enjoyment. These results also indicate that it is possible to utilize peripheral vision supports in other disciplines in which there is a distance between the object of attention and the sports field on which visual feedback can be presented.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115236224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AR Pottery Wheel-Throwing by Attaching Omnidirectional Cameras to the Center of a User's Palms (DOI: 10.1145/3311823.3311856)
Y. Maruyama, Y. Kono
This research describes our system for AR pottery wheel-throwing employing an HMD and omnidirectional cameras, each attached to the center of one of the user's palms. The omnidirectional cameras allow the system to estimate the user's finger postures and the three-dimensional relative position and orientation between the user's hands and the virtual clay model on the wheel. The system detects a marker on the desk, places the wheel in the marker's coordinate system, and estimates finger postures in real time. It then simulates collisions between the virtual clay model and the left/right hand models based on this information. Wheel-throwing is reproduced in the Unity environment by deforming the clay model on contact with the hand models in this simulation.
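One way to picture the clay deformation step is a lathe profile of ring radii, where a fingertip that penetrates a ring pushes it inward. This is our simplified sketch, not the authors' Unity implementation.

```python
# Sketch: clay as a lathe profile; a penetrating fingertip pushes rings inward.
import numpy as np

class ClayProfile:
    def __init__(self, n_rings=64, height=0.2, radius=0.06):
        self.heights = np.linspace(0.0, height, n_rings)  # ring heights (m)
        self.radii = np.full(n_rings, radius)             # ring radii (m)

    def deform(self, finger_pos, finger_radius=0.008):
        """finger_pos: (x, y, z) in the wheel's marker coordinate system."""
        x, y, z = finger_pos
        r_finger = np.hypot(x, y)                  # radial distance from the axis
        near = np.abs(self.heights - z) < finger_radius
        # Push any penetrated ring back to the contact surface (min wall 5 mm).
        limit = max(r_finger - finger_radius, 0.005)
        self.radii[near] = np.minimum(self.radii[near], limit)
```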
{"title":"AR Pottery Wheel-Throwing by Attaching Omnidirectional Cameras to the Center of a User's Palms","authors":"Y. Maruyama, Y. Kono","doi":"10.1145/3311823.3311856","DOIUrl":"https://doi.org/10.1145/3311823.3311856","url":null,"abstract":"This research describes our system for AR pottery wheel-throwing employing an HMD and omnidirectional cameras each of which is attached to the center of a user's palm. The omnidirectional cameras enable the user's finger postures and the three-dimensional relative position and orientation between the user's hands and virtual clay model on the wheel to be estimated. Our system detects a marker on the desk and the wheel is set on its coordinate system along with the finger posture estimation in real time. The system then simulates the collision between the virtual clay model and the left/right hand model based on the above information. Pottery wheel-throwing is reproduced in Unity software environment by deforming the clay model by contact with hand models in this simulation.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"592 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115979170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TherModule (DOI: 10.1145/3311823.3311826)
Tomosuke Maeda, T. Kurahashi
Humans can feel tactile sensations across the whole body, yet many haptic devices are restricted to particular body parts and may not provide natural haptic feedback. We therefore propose a novel interface, TherModule, a wearable and modular thermal feedback system for embodied interactions built on a wireless platform. TherModule can be worn on multiple body parts, such as the wrist, forearm, ankle, and neck. In this paper, we describe the system concept, module implementation, and applications. To demonstrate and explore embodied interaction with thermal feedback, we implemented prototype applications with TherModule worn on multiple parts of the body, including movie experiences, projector-based augmented reality, navigation, and notification. An experiment on the movie experience showed that participants reported a stronger sense of interaction between the thermal and visual stimuli.
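Since the modules are driven over a wireless link, a compact binary command format is one plausible design. The fields below (module id, target temperature, duration) are hypothetical; the abstract does not describe the actual protocol.

```python
# Hypothetical set-temperature command for a worn thermal module; the real
# TherModule message format is not given in the abstract.
import struct

def pack_thermal_command(module_id: int, target_celsius: float, duration_s: float) -> bytes:
    """Little-endian payload: uint8 module id, float32 target, float32 duration."""
    return struct.pack("<Bff", module_id, target_celsius, duration_s)

def unpack_thermal_command(payload: bytes) -> dict:
    module_id, target, duration = struct.unpack("<Bff", payload)
    return {"module": module_id, "target_c": target, "duration_s": duration}
```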
{"title":"TherModule","authors":"Tomosuke Maeda, T. Kurahashi","doi":"10.1145/3311823.3311826","DOIUrl":"https://doi.org/10.1145/3311823.3311826","url":null,"abstract":"Humans have specific sensory organs and they can feel tactile sensation on the whole body. However, many haptic devices have limitations due to the location of the body part and might not provide natural haptic feedback. Thus, we propose a novel interface, TherModule, which is a wearable and modular thermal feedback system for embodied interactions based on a wireless platform. TherModule can be worn on multiple body parts such as the wrist, forearm, ankle, and neck. In this paper, we describe the system concept, module implementation, and applications. To demonstrate and explore the embodied interaction with thermal feedback, we implemented prototype applications, such as movie experiences, projector-based augmented reality, navigation, and notification based on a wireless platform, with TherModule on multiple parts of the body. The result of an experiment on movie experience showed that participants felt more interactions between temperature and visual stimulus.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128527008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Eyeglasses Replacement for a 3D Virtual Try-on System (DOI: 10.1145/3311823.3311854)
Takumi Kobayashi, Yuta Sugiura, H. Saito, Yuji Uema
This paper presents a 3D virtual eyeglasses try-on system for practical use. When fitting eyeglasses in a shop, consumers want to look at themselves in a mirror while trying on various eyeglass styles. However, people who need corrective eyeglasses cannot clearly see their own faces in a mirror without wearing them, which makes fitting new eyeglasses difficult. This research proposes a virtual try-on system that can be used while wearing eyeglasses: we virtually replace the user's eyeglasses in the input video with new ones. Moreover, a fast and accurate face-tracking tool enables our system to automatically display the 3D virtual glasses following the user's head motion. Experimental results demonstrate that the proposed method can render virtual glasses naturally while the user is wearing real eyeglasses.
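The rendering step can be summarized as transforming the glasses model by the tracked head pose and projecting it into the image. The sketch below assumes a standard pinhole camera model and is not the authors' specific pipeline.

```python
# Sketch: place a 3D glasses model using a tracked head pose, then project it.
import numpy as np

def pose_glasses(vertices: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) glasses model in head coordinates;
    R (3x3), t (3,): head rotation and translation from the face tracker."""
    return vertices @ R.T + t

def project(points: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Pinhole projection with camera intrinsics K (3x3) -> (N, 2) pixels."""
    p = points @ K.T
    return p[:, :2] / p[:, 2:3]
```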
{"title":"Automatic Eyeglasses Replacement for a 3D Virtual Try-on System","authors":"Takumi Kobayashi, Yuta Sugiura, H. Saito, Yuji Uema","doi":"10.1145/3311823.3311854","DOIUrl":"https://doi.org/10.1145/3311823.3311854","url":null,"abstract":"This paper presents a 3D virtual eyeglasses try-on system for practical use. For fitting eyeglasses in a shop, consumers wish to look at themselves in a mirror while trying on various eyeglass styles. However, for people who need to wear eyeglasses for correcting problems with eyesight, it is impossible for them to clearly observe their face in the mirror without wearing eyeglasses. This makes fitting them for new eyeglasses difficult. This research proposes a virtual try-on system that can be used while wearing eyeglasses. We replace the user's eyeglasses in the input video with new eyeglasses virtually. Moreover, a fast and accurate face tracking tool enables our system to automatically display 3D virtual glasses following a user's head motion. Experimental results demonstrate that the proposed method can render virtual glasses naturally while the user is wearing real eyeglasses.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130959600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Double Shellf: What Psychological Effects can be Caused through Interaction with a Doppelganger? (DOI: 10.1145/3311823.3311862)
Yuji Hatada, S. Yoshida, Takuji Narumi, M. Hirose
Advances in 3D capture technology have made it easy to generate a realistic avatar that represents a person in virtual environments. Because avatars can easily be duplicated there, an otherwise impossible situation can arise in which people see their own doppelgangers. A doppelganger is a double of a person, sometimes portrayed as a sinister presence. To investigate how people feel and react when facing their doppelgangers, we developed "Double Shellf", a virtual reality experience in which people interact with their virtual doppelgangers in various situations. In this paper, we introduce the design of Double Shellf and discuss the reactions of 86 users. The user study revealed that most people felt intense eeriness when their doppelganger acted autonomously and when it touched them. We also found a gender difference in reactions to doppelgangers, and we explore effective ways of utilizing them.
{"title":"Double Shellf: What Psychological Effects can be Caused through Interaction with a Doppelganger?","authors":"Yuji Hatada, S. Yoshida, Takuji Narumi, M. Hirose","doi":"10.1145/3311823.3311862","DOIUrl":"https://doi.org/10.1145/3311823.3311862","url":null,"abstract":"Advances in 3D capture technology have made it easier to generate a realistic avatar, which can represent a person in virtual environments. Because avatars can be easily duplicated in the virtual environments, there can be an unrealistic situation where a person sees her/his own doppelgangers. Doppelganger is a double of a person and sometimes portrayed as a sinister existence. To investigate how people feel and react when they face their doppelgangers, we developed \"Double Shellf\", a virtual reality experience in which people can interact with their virtual doppelgangers in various situations. In this paper, we introduce the design of Double Shellf and discuss the reactions of 86 users. The user study revealed that most people felt intense eeriness when they see their doppelgangers which acts autonomously and when they were touched by their doppelgangers. We also found that there is a gender difference in reactions to their doppelgangers. We explore the effective way of utilizing doppelgangers.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133773751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}