Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00037
An AR Work Instructions Authoring Tool for Human-Operated Industrial Assembly Lines
T. Lavric, Emmanuel Bricard, M. Preda, T. Zaharia
AR technology has started replacing classical training procedures and is increasingly adopted in industrial environments as a training tool. The key challenge, which has been underestimated, is the effort required to author AR instructions. This research investigates the context of human-operated assembly lines in manufacturing factories. The main objective is to identify and implement a way of authoring step-by-step AR instruction procedures that satisfies the industrial requirements identified in our case study and in the literature. Our proposal focuses in particular on speed, simplicity and flexibility. As a result, the proposed authoring tool makes it possible to author AR instructions in a very short time, does not require technical skills and is easy for untrained workers to operate. Unlike existing solutions, our proposal does not rely on a preparation stage: the entire authoring procedure is performed directly, and only, inside an AR headset.
{"title":"An AR Work Instructions Authoring Tool for Human-Operated Industrial Assembly Lines","authors":"T. Lavric, Emmanuel Bricard, M. Preda, T. Zaharia","doi":"10.1109/AIVR50618.2020.00037","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00037","url":null,"abstract":"AR technology has started replacing classical training procedures and is increasingly adopted in the industrial environment as training tool. The key challenge that has been underestimated is the required effort of authoring AR instructions. This research investigates the context of humanoperated assembly lines in manufacturing factories. The main objective is to identify and implement a way of authoring step-bystep AR instruction procedures, in a manner that satisfies industrial requirements identified in our case study and in the literature. Our proposal focuses in particular on speed, simplicity and flexibility. As a result, the proposed authoring tool makes it possible to author AR instructions in a very short time, does not require technical skills and is easy to operate by untrained workers. Compared to existing solutions, our proposal does not rely on a preparation stage. The entire authoring procedure is performed directly and only inside an AR headset.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"358 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127580117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00058
FaceAUG: A Cross-Platform Application for Real-Time Face Augmentation in Web Browser
T. Sun
This paper presents FaceAUG, a cross-platform application for real-time face augmentation in a web browser. Human faces are detected and tracked in real time in the video stream from the built-in or external webcam of the user's device. The application then overlays 2D or 3D augmented reality (AR) filters and effects on the detected face region(s) to achieve a mixed virtual-and-real effect. A 2D effect can be a photo frame or a 2D face mask built from an image in the local repository. A 3D effect is a 3D face model with a colored material, an image texture, or a video texture. The application uses TensorFlow.js to load the pre-trained Face Mesh model, which predicts the regions and landmarks of the faces appearing in the video stream. Three.js is used to create the face geometries and render them with the material and texture selected by the user. FaceAUG can be used on any device with an internal or external camera and a modern web browser. The application is implemented entirely with front-end techniques and therefore works without any server-side support. Experimental results on different platforms verify the effectiveness of the proposed approach.
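The abstract names the building blocks of the pipeline (webcam video, the TensorFlow.js Face Mesh model, Three.js rendering) but no implementation details. The following is a minimal, hedged sketch of how such a browser pipeline can be wired together; the video element ID, the camera setup, and the landmark point-cloud overlay (standing in for the paper's 2D/3D filters) are illustrative assumptions, not the authors' code.

    // Illustrative sketch: webcam stream -> TensorFlow.js Face Mesh -> Three.js overlay.
    // FaceAUG's actual implementation is not given in the abstract.
    import * as THREE from 'three';
    import * as facemesh from '@tensorflow-models/facemesh';
    import '@tensorflow/tfjs-backend-webgl';

    async function main(): Promise<void> {
      // Feed the webcam stream into a <video> element assumed to exist in the page.
      const video = document.getElementById('webcam') as HTMLVideoElement;
      video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
      await video.play();

      // Transparent WebGL canvas layered over the video for the AR effect.
      const renderer = new THREE.WebGLRenderer({ alpha: true });
      renderer.setSize(video.videoWidth, video.videoHeight);
      document.body.appendChild(renderer.domElement);

      // Orthographic camera in pixel coordinates, so landmarks map directly onto the video.
      const camera = new THREE.OrthographicCamera(0, video.videoWidth, 0, video.videoHeight, -1000, 1000);
      const scene = new THREE.Scene();

      // A landmark point cloud stands in for the paper's 2D/3D face filters.
      const geometry = new THREE.BufferGeometry();
      scene.add(new THREE.Points(geometry, new THREE.PointsMaterial({ color: 0x00ff88, size: 2 })));

      // Pre-trained Face Mesh model: 468 3D landmarks per detected face.
      const model = await facemesh.load({ maxFaces: 1 });

      const renderLoop = async () => {
        const faces = await model.estimateFaces(video);
        if (faces.length > 0) {
          // scaledMesh holds [x, y, z] landmark coordinates in input-image pixels.
          const flat = (faces[0].scaledMesh as number[][]).flat();
          geometry.setAttribute('position', new THREE.Float32BufferAttribute(flat, 3));
        }
        renderer.render(scene, camera);
        requestAnimationFrame(renderLoop);
      };
      requestAnimationFrame(renderLoop);
    }

    main();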
{"title":"FaceAUG: A Cross-Platform Application for Real-Time Face Augmentation in Web Browser","authors":"T. Sun","doi":"10.1109/AIVR50618.2020.00058","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00058","url":null,"abstract":"This paper presents FaceAUG, a cross-platform application for real-time face augmentation in a web browser. Human faces are detected and tracked in real-time from the video stream of the embedded or separated webcam of the user device. Then, the application overlays different 2D or 3D augmented reality (AR) filters and effects over the region of the detected face(s) to achieve a mixed virtual and AR effect. A 2D effect can be a photo frame or a 2D face mask using an image from the local repository. A 3D effect is a 3D face model with a colored material, an image texture, or a video texture. The application uses TensorFlow.js to load the pre-trained Face Mesh model for predicting the regions and landmarks of the faces that appear in the video stream. Three.js is used to create the face geometries and render them using the material and texture selected by the user. FaceAUG can be used on any device, as long as an internal or external camera and a state-of-the-art web browser are accessible on the device. The application is implemented using front-end techniques and is therefore functional without any server-side supports at back-end. Experimental results on different platforms verified the effectiveness of the proposed approach.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129042776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-01 | DOI: 10.1109/aivr50618.2020.00001
Title Page
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00041
Lane Line Map Estimation for Visual Alignment
Minjung Son, Hyun Sung Chang
Lane detection is important for visualization tasks as well as for autonomous driving. However, recent approaches have focused mainly on the latter, employing sophisticated sensors. This paper presents a novel method for estimating a lane line map from single images, which is applicable to visualization tasks such as augmented reality (AR) navigation. Our learning-based approach is designed for sparse lane data under perspective view and works reliably even in difficult situations involving irregular data forms, sensor variations, dynamic environments, and obstacles. We also introduce the concept of visual alignment, which defines visual matching between the estimated lane line map and a corresponding external map, thereby recasting various visualization-related applications as score-maximization problems. Experimental results demonstrate that the proposed method can not only be used directly for lane-based 2D data augmentation but also be extended to 3D localization for viewpoint pose estimation, which is essential in various AR scenarios.
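As a reading aid for the score-maximization idea mentioned above, the alignment objective can be written schematically as follows; the symbols are illustrative, since the abstract does not give the paper's exact scoring function or pose parameterization:

    \hat{\theta} = \arg\max_{\theta} \; S\big( L_{\mathrm{est}},\; \Pi_{\theta}(M_{\mathrm{ext}}) \big)

where L_est is the lane line map estimated from the input image, M_ext is the external map, \Pi_\theta projects that map into the image under a candidate viewpoint pose \theta, and S scores the visual alignment between the two; viewpoint pose estimation then amounts to maximizing S.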
{"title":"Lane Line Map Estimation for Visual Alignment","authors":"Minjung Son, Hyun Sung Chang","doi":"10.1109/AIVR50618.2020.00041","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00041","url":null,"abstract":"Lane detection is important for visualization-tasks as well as autonomous driving. However, recent approaches have focused principally on the latter part, employing sophisticated sensors. This paper presents a novel lane line map estimation method from single images, which is applicable for visualization tasks such as augmented reality (AR) navigation. Our learning-based approach is designed for sparse lane data under perspective view. It works reliably even in various difficult situations, such as those involving irregular data forms, sensor variations, dynamic environments, and obstacles. We also suggest the visual alignment concept to define visual matching between the estimated lane line map and the corresponding external map, thereby enabling the conversion of various applications related to visualization into score maximization. Experimental results demonstrated that the proposed method could not only be directly used for lane-based 2D data augmentation but also be extended to 3D localization, for viewpoint pose estimation, which is essential for various AR scenarios.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115938682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00070
High-quality First-person Rendering Mixed Reality Gaming System for In Home Setting
Yu-Yen Chung, Hung-Jui Guo, H. G. Kumar, B. Prabhakaran
With the advent of low-cost RGB-D cameras, mixed reality serious games using ‘live’ 3D human avatars have become popular. RGB-D cameras are used to capture and transfer the user's motion and texture onto a 3D human avatar in the virtual environment. A single-camera system is more suitable for mixed reality games deployed in homes, given the ease of setting it up. In these games, users can have either a third-person or a first-person perspective of the virtual environment. Since a first-person perspective provides a better Sense of Embodiment (SoE), in this paper we explore the problem of providing a first-person perspective for mixed reality serious games played in homes. We propose a real-time textured humanoid-avatar framework that provides a first-person perspective and addresses the challenges of setting up such a gaming system in homes. Our approach comprises: (a) SMPL humanoid model optimization for continuously capturing the user's movements; and (b) a real-time OpenGL pipeline for transferring and merging textures to build a global texture atlas across multiple video frames. We target the proposed approach at a serious game for amputees, called Mr.MAPP (Mixed Reality-based framework for Managing Phantom Pain), in which the amputee's intact limb is mirrored in real time in the virtual environment. For this purpose, our framework also introduces a mirroring method to generate a textured phantom limb in the virtual environment. We carried out a series of visual and metrics-based studies to evaluate the effectiveness of the proposed approaches for skeletal pose fitting and texture transfer onto SMPL humanoid models, as well as for mirroring and texturing the missing limb (in preparation for future amputee-based studies).
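The abstract mentions a mirroring method for generating the phantom limb but gives no details. As a purely illustrative sketch (not the paper's method), mirroring can be thought of as reflecting the intact limb's vertices across the body's sagittal plane; the plane origin, the lateral axis, and the sample vertices below are assumptions.

    // Illustrative only: reflect intact-limb vertices across the body's sagittal plane
    // to synthesize a "phantom" limb. The paper's actual mirroring of the textured SMPL
    // mesh is not described in the abstract; the plane origin/normal below are assumptions.
    import { Vector3 } from 'three';

    function mirrorAcrossSagittalPlane(p: Vector3, origin: Vector3, lateralAxis: Vector3): Vector3 {
      const n = lateralAxis.clone().normalize();       // plane normal = body's left-right axis
      const d = p.clone().sub(origin).dot(n);          // signed distance from the plane
      return p.clone().sub(n.multiplyScalar(2 * d));   // reflection: p' = p - 2 d n
    }

    // Hypothetical example: mirror a few vertices of the intact arm.
    const pelvis = new Vector3(0, 1.0, 0);             // assumed point on the sagittal plane
    const lateralAxis = new Vector3(1, 0, 0);          // assumed left-right axis
    const intactArm = [new Vector3(0.30, 1.40, 0.10), new Vector3(0.45, 1.20, 0.12)];
    const phantomArm = intactArm.map(v => mirrorAcrossSagittalPlane(v, pelvis, lateralAxis));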
{"title":"High-quality First-person Rendering Mixed Reality Gaming System for In Home Setting","authors":"Yu-Yen Chung, Hung-Jui Guo, H. G. Kumar, B. Prabhakaran","doi":"10.1109/AIVR50618.2020.00070","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00070","url":null,"abstract":"With the advent of low-cost RGB-D cameras, mixed reality serious games using ‘live’ 3D human avatars have become popular. Here, RGB-D cameras are used for capturing and transferring user’ motion and texture onto the 3D human avatar in virtual environments. A system with a single camera is more suitable for such mixed reality games deployed in homes, considering the ease of setting up the system. In these mixed reality games, users can have either a third-person perspective or a first-person perspective of the virtual environments used in the games. Since first-person perspective provides a better Sense of Embodiment (SoE), in this paper, we explore the problem of providing a first-person perspective for mixed reality serious games played in homes. We propose a real time textured humanoid-avatar framework to provide a first-person perspective and address the challenges involved in setting up such a gaming system in homes. Our approach comprises: (a) SMPL humanoid model optimization for capturing user’ movements continuously; (b) a real-time texture transferring and merging OpenGL pipeline to build a global texture atlas across multiple video frames. We target the proposed approach towards a serious game for amputees, called Mr.MAPP (Mixed Reality-based framework for Managing Phantom Pain), where amputee’ intact limb is mirrored in real-time in the virtual environment. For this purpose, our framework also introduces a mirroring method to generate a textured phantom limb in the virtual environment. We carried out a series of visual and metrics-based studies to evaluate the effectiveness of the proposed approaches for skeletal pose fitting and texture transfer to SMPL humanoid models, as well as the mirroring and texturing missing limb (for future amputee based studies).","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129657388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00030
Exploring the feasibility of mitigating VR-HMD-induced cybersickness using cathodal transcranial direct current stimulation
Gang Li, Francisco Macía Varela, Abdullah Habib, Qi Zhang, Mark Mcgill, S. Brewster, F. Pollick
Many head-mounted virtual reality display (VR-HMD) applications that involve moving visual environments (e.g., virtual rollercoasters, car and airplane driving) trigger cybersickness (CS). Previous research by Arshad et al. (2015) explored the inhibitory effect of cathodal transcranial direct current stimulation (tDCS) on vestibular cortical excitability for traditional motion sickness (MS); however, its applicability to CS, as typically experienced in immersive VR, remains unknown. The presented double-blinded 2x2x3 mixed-design experiment (independent variables: stimulation condition [cathodal/anodal]; timing of VR stimulus exposure [before/after tDCS]; sickness scenario [slight symptom onset/moderate symptom onset/recovery]) investigates whether the tDCS protocol adapted from Arshad et al. (2015) is effective at delaying the onset of CS symptoms and/or accelerating recovery from them in healthy participants. Quantitative analysis revealed that cathodal tDCS did delay the onset of slight symptoms compared with the anodal condition. However, there were no significant differences between the two stimulation types in delaying the onset of moderate symptoms or in shortening time to recovery. Possible reasons for the present findings are discussed and suggestions for future studies are proposed.
{"title":"Exploring the feasibility of mitigating VR-HMD-induced cybersickness using cathodal transcranial direct current stimulation","authors":"Gang Li, Francisco Macía Varela, Abdullah Habib, Qi Zhang, Mark Mcgill, S. Brewster, F. Pollick","doi":"10.1109/AIVR50618.2020.00030","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00030","url":null,"abstract":"Many head-mounted virtual reality display (VR-HMD) applications that involve moving visual environments (e.g., virtual rollercoaster, car and airplane driving) will trigger cybersickness (CS). Previous research Arshad et al. (2015) has explored the inhibitory effect of cathodal transcranial direct current stimulation (tDCS) on vestibular cortical excitability, applied to traditional motion sickness (MS), however its applicability to CS, as typically experienced in immersive VR, remains unknown. The presented double-blinded 2x2x3 mixed design experiment (independent variables: stimulation condition [cathodal/anodal]; timing of VR stimulus exposure [before/after tDCS]; sickness scenario [slight symptoms onset/moderate symptoms onset/recovery]) aims to investigate whether the tDCS protocol adapted from Arshad et al. (2015) is effective at delaying the onset of CS symptoms and/or accelerating recovery from them in healthy participants. Quantitative analysis revealed that the cathodal tDCS indeed delayed the onset of slight symptoms if compared to that in anodal condition. However, there are no significant differences in delaying the onset of moderate symptoms nor shortening time to recovery between the two stimulation types. Possible reasons for present findings are discussed and suggestions for future studies are proposed.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129069562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00040
Algorithm-Aware Neural Network Based Image Compression for High-Speed Imaging
Reid Pinkham, Tanner Schmidt, A. Berkovich
In wearable AR/VR systems, data transmission between cameras and central processors can account for a significant portion of total system power, particularly in high-framerate applications. It therefore becomes necessary to compress video streams to reduce the cost of data transmission. In this paper we present a CNN-based compression scheme for such vision systems. We demonstrate that, unlike conventional compression techniques, our method can be tuned for specific machine vision applications, enabling increased compression for a given application performance target. We present results for Detectron2 Keypoint Detection and compare the performance and computational complexity of our method with those of existing compression schemes such as H.264. We also created a new high-framerate dataset that represents common scenarios for wearable AR/VR devices.
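The abstract states that the codec can be tuned toward a specific vision application but does not spell out the training objective. One common way such a trade-off is expressed in learned compression (shown here only as an illustration, not as the authors' loss) is a task-aware rate-distortion objective:

    \mathcal{L} = \mathcal{L}_{\mathrm{task}}\big( f(\hat{x}),\, y \big) + \lambda\, R(\hat{x})

where \hat{x} is the decoded frame, f is the downstream model (here, Detectron2 keypoint detection), y its target, R a bit-rate term, and \lambda the knob that trades compression ratio against application performance.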
{"title":"Algorithm-Aware Neural Network Based Image Compression for High-Speed Imaging","authors":"Reid Pinkham, Tanner Schmidt, A. Berkovich","doi":"10.1109/AIVR50618.2020.00040","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00040","url":null,"abstract":"In wearable AR/VR systems, data transmission between cameras and central processors can account for a significant portion of total system power, particularly in high framerate applications. Thus, it becomes necessary to compress video streams to reduce the cost of data transmission. In this paper we present a CNN-based compression scheme for such vision systems. We demonstrate that, unlike conventional compression techniques, our method can be tuned for specific machine vision applications. This enables increased compression for a given application performance target. We present results for Detectron2 Keypoint Detection and compare the performance and computational complexity of our method to existing compression schemes, such as H.264. We created a new high-framerate dataset which represents common scenarios for wearable AR/VR devices.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"478 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123396793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00019
Virtual Reality Lifelog Explorer: A Prototype for Immersive Lifelog Analytics
Aaron Duane, B. Jónsson, C. Gurrin
The Virtual Reality Lifelog Explorer is a prototype for immersive personal data analytics, intended as an exploratory effort to produce more sophisticated virtual or augmented reality analysis prototypes in the future. An earlier version of this prototype competed in, and won, the first Lifelog Search Challenge (LSC) held at ACM ICMR in 2018.
{"title":"Virtual Reality Lifelog Explorer: A Prototype for Immersive Lifelog Analytics","authors":"Aaron Duane, B. Jónsson, C. Gurrin","doi":"10.1109/AIVR50618.2020.00019","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00019","url":null,"abstract":"The Virtual Reality Lifelog Explorer is a prototype for immersive personal data analytics, intended as an exploratory effort to produce more sophisticated virtual or augmented reality analysis prototypes in the future. An earlier version of this prototype competed in, and won, the first Lifelog Search Challenge (LSC) held at ACM ICMR in 2018.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129968959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00073
Workload, Presence and Task Performance of Virtual Object Manipulation on WebVR
Wenxin Sun, Mengjie Huang, Rui Yang, Jingjing Zhang, Liu Wang, Ji Han, Yong Yue
WebVR technology is widely used as a visualization approach to display virtual objects on 2D webpages. Much of the current literature on virtual object manipulation on 2D screens pays particular attention to task performance, but few studies focus on users' psychological feedback and none examines its relationship with task performance. This paper compares manipulation modes with different degrees of freedom (DoF) for translation and rotation on WebVR, exploring users' workload and presence through self-reported data and task performance through completion time and error rate. The experimental results show that an increase in DoF is associated with lower perceived workload, while people may feel a higher level of presence during the tasks. Additionally, this study finds only a positive correlation between workload and completion time and a negative correlation between presence and completion time, meaning that when feeling less workload or more presence, people tend to spend less time completing translation and rotation tasks on WebVR.
{"title":"Workload, Presence and Task Performance of Virtual Object Manipulation on WebVR","authors":"Wenxin Sun, Mengjie Huang, Rui Yang, Jingjing Zhang, Liu Wang, Ji Han, Yong Yue","doi":"10.1109/AIVR50618.2020.00073","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00073","url":null,"abstract":"WebVR technology is widely used as a visualization approach to display virtual objects on 2D webpages. Much of the current literature on virtual object manipulation on the 2D screen pays particular attention to task performance, but few studies focus on users’ psychological feedback and no literature aims at its relationship with task performance. This paper compares manipulation modes with different degrees of freedom (DoF) in translation and rotation on WebVR to explore users’ workload and presence by self-reported data, and task performance by measuring completion time and error rate. The experiment results present that the increase of DoF is associated with lower perceived workload, while people may feel a higher level of presence during tasks. Additionally, this study only finds a positive correlation between workload and efficiency, and a negative correlation between presence and efficiency, which means that when feeling less workload or more presence, people tend to spend less time to complete translation and rotation tasks on WebVR.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133398702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00082
From Virtual Reality to Neuroscience and Back: a Use Case on Peripersonal Hand Space Plasticity
Agata Marta Soccini, F. Ferroni, M. Ardizzi
The human brain does not represent space homogeneously; rather, it constructs multiple representations of space depending on the source of sensory stimulation and the nature of the interaction between the body and the environment. The peripersonal space is defined as an imaginary area coded as a separate sector of space, as if there were a boundary between what the body might or might not interact with. We present an experimental paradigm that combines virtual reality (VR) and functional magnetic resonance imaging (fMRI) to investigate human behavior and its neural basis during training of the plasticity of the peripersonal space around the hand. The expected results may provide knowledge about a phenomenon of interest to behavioral neuroscience as well as to the interaction of embodied self-avatars in virtual environments.
{"title":"From Virtual Reality to Neuroscience and Back: a Use Case on Peripersonal Hand Space Plasticity","authors":"Agata Marta Soccini, F. Ferroni, M. Ardizzi","doi":"10.1109/AIVR50618.2020.00082","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00082","url":null,"abstract":"The human brain does not represent space homogeneously, but it constructs multiple representations of it depending on the source of sensory stimulation and the nature of interaction between the body and the environment. The peripersonal space is defined as an imaginary area coded as separated sector of space, as if there were a boundary between what the body might or might not interact with. We present an experimental pattern that combines the use of virtual reality (VR) and functional magnetic resonance imaging (fMRI) to investigate human behavior and neural basis in case of training of the plasticity of the peripersonal space around the hand. The expected results may provide knowledge on a phenomenon interesting for behavioral neuroscience as well as for the interaction of embodied self-avatars in virtual environments.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130875119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}