Training with a world champion: augmented reality applications in sport Design-led research
S. Palmieri, Alessio Righi, M. Bisson, A. Ianniello
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00075
Recent and continuous innovations in extended reality, and in particular augmented reality, have the potential to revolutionize many aspects of their target market sectors. At the same time, steady progress in artificial intelligence, machine learning and deep learning, when combined with these innovations, makes it possible to conceive solutions that shape new ways to inform, to improve skills and to spend time. The ability to simulate contexts, environments, actions and emotions, together with the possibility of using the data generated by those simulations in disruptive ways, allows new learning and training paths to be imagined and created. This ongoing research has been carried out within the Interdepartmental Laboratory EDME (Environmental Design Multisensory Experience) of the Design Department of Politecnico di Milano. It began by investigating the state of the art of augmented reality and artificial intelligence technologies and highlighting interesting, highly innovative case studies; from this first phase we moved on to analyze the sport sector, in which an important potential for future development was recognized. The last part of this first phase consisted in elaborating a concept for an enabling technological system and a business model with a high innovation coefficient, whose realization is hypothesized for the year 2030. The aim is to demonstrate that a design operation which starts from emerging technologies and a sector of high interest, and assumes a usage scenario ten years out, is not only extremely interesting but also, and above all, useful for consciously anticipating and accompanying the technological development described.
{"title":"Training with a world champion: augmented reality applications in sport Design-led research","authors":"S. Palmieri, Alessio Righi, M. Bisson, A. Ianniello","doi":"10.1109/AIVR50618.2020.00075","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00075","url":null,"abstract":"Recent and continuous innovations in the field of extended reality and, in particular, augmented reality, are able to revolutionize different aspects of the reference market sectors. At the same time, a constant evolution in the area of artificial intelligence, machine learning and deep learning, if combined with the aforementioned innovations, allows to conceive solutions able to shape new ways to inform, to improve skills and to spend time. The ability to simulate contexts, environments, actions and emotions and the possibility to use the data generated by the simulations in a disruptive way permit to imagine and create learning and strengthening paths.This developing research has been carried out within the Interdepartmental Laboratory EDME (Environmental Design Multisensory Experience), which belongs to the Design Department of Politecnico di Milano. It has been conducted by investigating the state of the art of augmented reality and artificial intelligence technologies, highlighting interesting and highly innovative case studies; from this first phase we moved on to analyze the sport sector in which an important potential for future development was recognized. The last part of the first phase of this research project consisted in the elaboration of a concept for an enabling technological system and a business model with a high innovation coefficient, whose realization is hypothesized for the year 2030.It is intended to demonstrate how a design operation, which started from emerging technologies and a sector of high interest and assumed a scenario of use over ten years, is not only extremely interesting but also, and above all, useful to consciously predict and accompany the aforementioned technological development.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125428707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title Page
Pub Date: 2020-12-01 | DOI: 10.1109/aivr50618.2020.00001
{"title":"Title Page","authors":"","doi":"10.1109/aivr50618.2020.00001","DOIUrl":"https://doi.org/10.1109/aivr50618.2020.00001","url":null,"abstract":"","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131233070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breast3D: An Augmented Reality System for Breast CT and MRI
Benjamin Allison, Xujiong Ye, Faraz Janan
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00052
Adoption of Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) devices, known collectively as Extended Reality (XR), has been increasing rapidly in recent years. However, XR research in medicine has shown a lack of diversity, being focused predominantly on augmenting surgical procedures. Whilst important, applying XR to medical diagnosis and surgical planning remains relatively unexplored. In this paper we present a fully functional mammographic image analysis system, Breast3D, that can reconstruct MRI and CT scan data in XR. Designed with the breast cancer Breast Imaging-Reporting and Data System (BI-RADS) risk lexicon, early detection, and clinical workflows such as multidisciplinary team (MDT) meetings in mind, our new mammography visualization system reconstructs CT and MRI volumes in a real 3D space. Breast3D builds on past literature and is inspired by research on diagnosis and surgical planning. In addition to visualizing the models in MR on the Microsoft HoloLens, Breast3D is versatile and portable to other XR head-mounted displays such as the HTC Vive. Breast3D demonstrates the early potential of XR for the diagnostics of 3D mammographic modalities, an application that has been proposed but until now has not been implemented.
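The abstract does not detail the reconstruction pipeline; as a rough illustration of one common way to turn a CT or MRI volume into a triangle mesh that an XR engine can render, here is a minimal isosurface-extraction sketch in Python. The threshold and the random stand-in volume are assumptions for demonstration, not Breast3D's actual implementation.

```python
# Minimal sketch: turn a 3D scan volume into a triangle mesh for XR display.
# Isosurface extraction is one common approach; the paper's pipeline is not
# described at this level, and the threshold below is a placeholder.
import numpy as np
from skimage import measure

def volume_to_mesh(volume: np.ndarray, level: float):
    """Extract an isosurface mesh (vertices, faces, normals) from a volume."""
    verts, faces, normals, _ = measure.marching_cubes(volume, level=level)
    return verts, faces, normals

# Hypothetical usage with random data standing in for a real CT/MRI volume:
volume = np.random.rand(64, 64, 64)
verts, faces, normals = volume_to_mesh(volume, level=0.5)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```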
{"title":"Breast3D: An Augmented Reality System for Breast CT and MRI","authors":"Benjamin Allison, Xujiong Ye, Faraz Janan","doi":"10.1109/AIVR50618.2020.00052","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00052","url":null,"abstract":"Adoption of Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) - known collectively as Extended Reality (XR) devices has been rapidly increasing over recent years. However, the focus of XR research has shown a lack of diversity in solutions to the problems within medicine, with it being predominantly focused in augmenting surgical procedures. Whilst important, XR applied to aiding medical diagnosis and surgical planning is relatively unexplored. In this paper we present a fully functional mammographic image analysis system, Breast3D, that can reconstruct MRI and CT scan data in XR. With breast cancer Breast Imaging-Reporting and Data System (BI-RADS) risk lexicon, early detection and clinical workflow such as Multi-disciplinary team (MDT) meetings for cancer in mind, our new mammography visualization system reconstructs CT and MRI volumes in a real 3D space. Breast3D is built upon the past literature and inspired from research for diagnosis and surgical planning. In addition to visualising the models in MR using the Microsoft HoloLens, Breast3D is versatile and portable to different XR head-mounted displays such as HTC Vive. Breast3D demonstrates the early potential for XR within diagnostics of 3D mammographic modalities, an application that has been proposed but until now has not been implemented.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134083048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Reality Lifelog Explorer: A Prototype for Immersive Lifelog Analytics
Aaron Duane, B. Jónsson, C. Gurrin
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00019
The Virtual Reality Lifelog Explorer is a prototype for immersive personal data analytics, intended as an exploratory effort to produce more sophisticated virtual or augmented reality analysis prototypes in the future. An earlier version of this prototype competed in, and won, the first Lifelog Search Challenge (LSC) held at ACM ICMR in 2018.
{"title":"Virtual Reality Lifelog Explorer: A Prototype for Immersive Lifelog Analytics","authors":"Aaron Duane, B. Jónsson, C. Gurrin","doi":"10.1109/AIVR50618.2020.00019","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00019","url":null,"abstract":"The Virtual Reality Lifelog Explorer is a prototype for immersive personal data analytics, intended as an exploratory effort to produce more sophisticated virtual or augmented reality analysis prototypes in the future. An earlier version of this prototype competed in, and won, the first Lifelog Search Challenge (LSC) held at ACM ICMR in 2018.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129968959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-quality First-person Rendering Mixed Reality Gaming System for In Home Setting
Yu-Yen Chung, Hung-Jui Guo, H. G. Kumar, B. Prabhakaran
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00070
With the advent of low-cost RGB-D cameras, mixed reality serious games using ‘live’ 3D human avatars have become popular. Here, RGB-D cameras capture and transfer the user’s motion and texture onto a 3D human avatar in the virtual environment. A single-camera system is more suitable for such mixed reality games deployed in homes, given the ease of setting it up. In these games, users can have either a third-person or a first-person perspective of the virtual environment. Since a first-person perspective provides a better Sense of Embodiment (SoE), in this paper we explore the problem of providing a first-person perspective for mixed reality serious games played in homes. We propose a real-time textured humanoid-avatar framework that provides a first-person perspective and addresses the challenges of setting up such a gaming system at home. Our approach comprises: (a) SMPL humanoid model optimization for capturing the user’s movements continuously; (b) a real-time OpenGL pipeline that transfers and merges textures to build a global texture atlas across multiple video frames. We target the proposed approach towards a serious game for amputees, called Mr.MAPP (Mixed Reality-based framework for Managing Phantom Pain), in which the amputee’s intact limb is mirrored in real time in the virtual environment. For this purpose, our framework also introduces a mirroring method that generates a textured phantom limb in the virtual environment. We carried out a series of visual and metrics-based studies to evaluate the effectiveness of the proposed approaches for skeletal pose fitting and texture transfer to SMPL humanoid models, as well as for mirroring and texturing the missing limb (for future amputee-based studies).
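The mirroring idea, reflecting the intact limb to synthesize a virtual phantom limb, can be pictured as a reflection of tracked joint positions across the body's sagittal plane. A minimal sketch under that assumption follows; the joint layout and midline are hypothetical, not Mr.MAPP's actual method.

```python
# Minimal sketch of limb mirroring: reflect tracked joint positions of the
# intact arm across the sagittal plane through the body's midline to
# synthesize a phantom limb. The joint layout is hypothetical, not Mr.MAPP's.
import numpy as np

def mirror_joints(joints: np.ndarray, midline_x: float) -> np.ndarray:
    """Reflect (N, 3) joint positions across the plane x = midline_x."""
    mirrored = joints.copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]
    return mirrored

# Hypothetical right-arm joints (shoulder, elbow, wrist) in meters:
right_arm = np.array([[0.20, 1.40, 0.0],
                      [0.25, 1.15, 0.1],
                      [0.30, 0.95, 0.2]])
left_phantom = mirror_joints(right_arm, midline_x=0.0)
print(left_phantom)  # x coordinates flipped; y and z unchanged
```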
{"title":"High-quality First-person Rendering Mixed Reality Gaming System for In Home Setting","authors":"Yu-Yen Chung, Hung-Jui Guo, H. G. Kumar, B. Prabhakaran","doi":"10.1109/AIVR50618.2020.00070","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00070","url":null,"abstract":"With the advent of low-cost RGB-D cameras, mixed reality serious games using ‘live’ 3D human avatars have become popular. Here, RGB-D cameras are used for capturing and transferring user’ motion and texture onto the 3D human avatar in virtual environments. A system with a single camera is more suitable for such mixed reality games deployed in homes, considering the ease of setting up the system. In these mixed reality games, users can have either a third-person perspective or a first-person perspective of the virtual environments used in the games. Since first-person perspective provides a better Sense of Embodiment (SoE), in this paper, we explore the problem of providing a first-person perspective for mixed reality serious games played in homes. We propose a real time textured humanoid-avatar framework to provide a first-person perspective and address the challenges involved in setting up such a gaming system in homes. Our approach comprises: (a) SMPL humanoid model optimization for capturing user’ movements continuously; (b) a real-time texture transferring and merging OpenGL pipeline to build a global texture atlas across multiple video frames. We target the proposed approach towards a serious game for amputees, called Mr.MAPP (Mixed Reality-based framework for Managing Phantom Pain), where amputee’ intact limb is mirrored in real-time in the virtual environment. For this purpose, our framework also introduces a mirroring method to generate a textured phantom limb in the virtual environment. We carried out a series of visual and metrics-based studies to evaluate the effectiveness of the proposed approaches for skeletal pose fitting and texture transfer to SMPL humanoid models, as well as the mirroring and texturing missing limb (for future amputee based studies).","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129657388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the feasibility of mitigating VR-HMD-induced cybersickness using cathodal transcranial direct current stimulation
Gang Li, Francisco Macía Varela, Abdullah Habib, Qi Zhang, Mark Mcgill, S. Brewster, F. Pollick
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00030
Many virtual reality head-mounted display (VR-HMD) applications that involve moving visual environments (e.g., virtual rollercoasters, car and airplane driving) trigger cybersickness (CS). Previous research by Arshad et al. (2015) explored the inhibitory effect of cathodal transcranial direct current stimulation (tDCS) on vestibular cortical excitability as applied to traditional motion sickness (MS); however, its applicability to CS, as typically experienced in immersive VR, remains unknown. The presented double-blinded 2x2x3 mixed-design experiment (independent variables: stimulation condition [cathodal/anodal]; timing of VR stimulus exposure [before/after tDCS]; sickness scenario [slight symptom onset/moderate symptom onset/recovery]) investigates whether the tDCS protocol adapted from Arshad et al. (2015) is effective at delaying the onset of CS symptoms and/or accelerating recovery from them in healthy participants. Quantitative analysis revealed that cathodal tDCS did delay the onset of slight symptoms compared to the anodal condition. However, there were no significant differences between the two stimulation types in delaying the onset of moderate symptoms or in shortening the time to recovery. Possible reasons for the present findings are discussed and suggestions for future studies are proposed.
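As a purely illustrative aside on how an onset-delay effect of this kind could be quantified, here is a minimal Python sketch comparing hypothetical time-to-onset samples between the two stimulation conditions. The numbers are invented, and this simple two-sample test is far simpler than the paper's 2x2x3 mixed-design analysis.

```python
# Illustrative sketch: comparing time-to-onset of slight cybersickness
# symptoms between cathodal and anodal tDCS groups. All values are invented
# placeholders; this is NOT the paper's actual statistical analysis.
import numpy as np
from scipy import stats

cathodal_onset = np.array([310.0, 285.0, 350.0, 400.0, 295.0])  # seconds, hypothetical
anodal_onset = np.array([220.0, 240.0, 200.0, 260.0, 215.0])    # seconds, hypothetical

t, p = stats.ttest_ind(cathodal_onset, anodal_onset)
print(f"t = {t:.2f}, p = {p:.4f}")  # longer cathodal latency = delayed onset
```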
{"title":"Exploring the feasibility of mitigating VR-HMD-induced cybersickness using cathodal transcranial direct current stimulation","authors":"Gang Li, Francisco Macía Varela, Abdullah Habib, Qi Zhang, Mark Mcgill, S. Brewster, F. Pollick","doi":"10.1109/AIVR50618.2020.00030","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00030","url":null,"abstract":"Many head-mounted virtual reality display (VR-HMD) applications that involve moving visual environments (e.g., virtual rollercoaster, car and airplane driving) will trigger cybersickness (CS). Previous research Arshad et al. (2015) has explored the inhibitory effect of cathodal transcranial direct current stimulation (tDCS) on vestibular cortical excitability, applied to traditional motion sickness (MS), however its applicability to CS, as typically experienced in immersive VR, remains unknown. The presented double-blinded 2x2x3 mixed design experiment (independent variables: stimulation condition [cathodal/anodal]; timing of VR stimulus exposure [before/after tDCS]; sickness scenario [slight symptoms onset/moderate symptoms onset/recovery]) aims to investigate whether the tDCS protocol adapted from Arshad et al. (2015) is effective at delaying the onset of CS symptoms and/or accelerating recovery from them in healthy participants. Quantitative analysis revealed that the cathodal tDCS indeed delayed the onset of slight symptoms if compared to that in anodal condition. However, there are no significant differences in delaying the onset of moderate symptoms nor shortening time to recovery between the two stimulation types. Possible reasons for present findings are discussed and suggestions for future studies are proposed.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129069562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Algorithm-Aware Neural Network Based Image Compression for High-Speed Imaging
Reid Pinkham, Tanner Schmidt, A. Berkovich
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00040
In wearable AR/VR systems, data transmission between cameras and central processors can account for a significant portion of total system power, particularly in high-framerate applications. It therefore becomes necessary to compress video streams to reduce the cost of data transmission. In this paper we present a CNN-based compression scheme for such vision systems. We demonstrate that, unlike conventional compression techniques, our method can be tuned for specific machine vision applications, enabling increased compression for a given application performance target. We present results for Detectron2 Keypoint Detection and compare the performance and computational complexity of our method to existing compression schemes such as H.264. We also created a new high-framerate dataset that represents common scenarios for wearable AR/VR devices.
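The core idea, tuning a learned codec for a downstream vision task rather than for pixel fidelity alone, can be sketched as an autoencoder trained with a weighted mix of reconstruction loss and task loss. Below is a minimal PyTorch-style sketch; every architectural and weighting choice is an assumption for illustration, not the paper's design.

```python
# Minimal sketch of task-aware learned compression: a small convolutional
# autoencoder whose training loss mixes pixel reconstruction with the loss of
# a downstream task network run on the reconstruction. All layer sizes and
# the weighting are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class CompressionAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 4, stride=2, padding=1),  # compact latent
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def task_aware_loss(model, task_net, task_loss_fn, images, targets, alpha=0.5):
    """Blend pixel fidelity with downstream-task performance on the reconstruction."""
    recon = model(images)
    pixel_loss = nn.functional.mse_loss(recon, images)
    task_loss = task_loss_fn(task_net(recon), targets)
    return alpha * pixel_loss + (1.0 - alpha) * task_loss
```

In a real codec the latent would also be quantized and entropy-coded; here, `alpha` simply trades pixel fidelity against accuracy on the downstream task.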
{"title":"Algorithm-Aware Neural Network Based Image Compression for High-Speed Imaging","authors":"Reid Pinkham, Tanner Schmidt, A. Berkovich","doi":"10.1109/AIVR50618.2020.00040","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00040","url":null,"abstract":"In wearable AR/VR systems, data transmission between cameras and central processors can account for a significant portion of total system power, particularly in high framerate applications. Thus, it becomes necessary to compress video streams to reduce the cost of data transmission. In this paper we present a CNN-based compression scheme for such vision systems. We demonstrate that, unlike conventional compression techniques, our method can be tuned for specific machine vision applications. This enables increased compression for a given application performance target. We present results for Detectron2 Keypoint Detection and compare the performance and computational complexity of our method to existing compression schemes, such as H.264. We created a new high-framerate dataset which represents common scenarios for wearable AR/VR devices.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"478 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123396793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Workload, Presence and Task Performance of Virtual Object Manipulation on WebVR
Wenxin Sun, Mengjie Huang, Rui Yang, Jingjing Zhang, Liu Wang, Ji Han, Yong Yue
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00073
WebVR technology is widely used as a visualization approach for displaying virtual objects on 2D webpages. Much of the current literature on virtual object manipulation on 2D screens pays particular attention to task performance, but few studies focus on users’ psychological feedback, and none examines its relationship with task performance. This paper compares manipulation modes with different degrees of freedom (DoF) in translation and rotation on WebVR, exploring users’ workload and presence through self-reported data and task performance through completion time and error rate. The experimental results show that an increase in DoF is associated with lower perceived workload, while people may feel a higher level of presence during tasks. Additionally, the study finds only a positive correlation between workload and completion time and a negative correlation between presence and completion time: when feeling less workload or more presence, people tend to spend less time completing translation and rotation tasks on WebVR.
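As an illustration of the kind of bivariate analysis the abstract reports, here is a minimal sketch computing Pearson correlations between self-reported scores and completion time. All values are invented placeholders chosen only to match the reported directions, not the study's data.

```python
# Minimal sketch: Pearson correlations between self-reported workload/presence
# and task completion time. Values are invented placeholders whose directions
# match the abstract (lower workload and higher presence -> shorter time).
import numpy as np
from scipy.stats import pearsonr

workload = np.array([55, 62, 48, 70, 40, 65])                 # hypothetical scores
presence = np.array([4.1, 3.5, 4.6, 3.0, 5.0, 3.2])           # hypothetical ratings
time_s = np.array([32.0, 41.0, 28.0, 47.0, 22.0, 44.0])       # completion time (s)

for name, x in [("workload", workload), ("presence", presence)]:
    r, p = pearsonr(x, time_s)
    print(f"{name} vs completion time: r = {r:.2f}, p = {p:.3f}")
```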
{"title":"Workload, Presence and Task Performance of Virtual Object Manipulation on WebVR","authors":"Wenxin Sun, Mengjie Huang, Rui Yang, Jingjing Zhang, Liu Wang, Ji Han, Yong Yue","doi":"10.1109/AIVR50618.2020.00073","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00073","url":null,"abstract":"WebVR technology is widely used as a visualization approach to display virtual objects on 2D webpages. Much of the current literature on virtual object manipulation on the 2D screen pays particular attention to task performance, but few studies focus on users’ psychological feedback and no literature aims at its relationship with task performance. This paper compares manipulation modes with different degrees of freedom (DoF) in translation and rotation on WebVR to explore users’ workload and presence by self-reported data, and task performance by measuring completion time and error rate. The experiment results present that the increase of DoF is associated with lower perceived workload, while people may feel a higher level of presence during tasks. Additionally, this study only finds a positive correlation between workload and efficiency, and a negative correlation between presence and efficiency, which means that when feeling less workload or more presence, people tend to spend less time to complete translation and rotation tasks on WebVR.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133398702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lane Line Map Estimation for Visual Alignment
Minjung Son, Hyun Sung Chang
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00041
Lane detection is important for visualization tasks as well as for autonomous driving. However, recent approaches have focused principally on the latter, employing sophisticated sensors. This paper presents a novel method for estimating a lane line map from single images, applicable to visualization tasks such as augmented reality (AR) navigation. Our learning-based approach is designed for sparse lane data under perspective view and works reliably even in difficult situations, such as those involving irregular data forms, sensor variations, dynamic environments, and obstacles. We also propose the concept of visual alignment, which defines visual matching between the estimated lane line map and a corresponding external map, thereby converting various visualization-related applications into score maximization problems. Experimental results demonstrate that the proposed method can not only be used directly for lane-based 2D data augmentation but can also be extended to 3D localization for viewpoint pose estimation, which is essential for various AR scenarios.
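The visual alignment concept, casting pose estimation as score maximization between the estimated lane line map and a projected external map, can be illustrated with a brute-force translation search. A minimal sketch follows; the scoring function, toy data, and search grid are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of visual alignment as score maximization: search over 2D
# offsets for the translation that best overlaps projected map lane points
# with an estimated lane line probability map. Illustrative only.
import numpy as np

def alignment_score(lane_map: np.ndarray, projected_pts: np.ndarray) -> float:
    """Sum lane-map probability at projected lane-line pixel locations."""
    h, w = lane_map.shape
    pts = projected_pts.round().astype(int)
    valid = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
    pts = pts[valid]
    return float(lane_map[pts[:, 1], pts[:, 0]].sum())

def best_offset(lane_map, map_pts, search=range(-10, 11)):
    """Brute-force the (dx, dy) translation that maximizes the alignment score."""
    return max(((dx, dy) for dx in search for dy in search),
               key=lambda o: alignment_score(lane_map, map_pts + np.array(o)))

lane_map = np.zeros((100, 100))
lane_map[:, 50] = 1.0                                          # toy lane at x = 50
map_pts = np.stack([np.full(100, 45.0), np.arange(100.0)], 1)  # projected at x = 45
print(best_offset(lane_map, map_pts))                          # -> (5, 0)
```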
{"title":"Lane Line Map Estimation for Visual Alignment","authors":"Minjung Son, Hyun Sung Chang","doi":"10.1109/AIVR50618.2020.00041","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00041","url":null,"abstract":"Lane detection is important for visualization-tasks as well as autonomous driving. However, recent approaches have focused principally on the latter part, employing sophisticated sensors. This paper presents a novel lane line map estimation method from single images, which is applicable for visualization tasks such as augmented reality (AR) navigation. Our learning-based approach is designed for sparse lane data under perspective view. It works reliably even in various difficult situations, such as those involving irregular data forms, sensor variations, dynamic environments, and obstacles. We also suggest the visual alignment concept to define visual matching between the estimated lane line map and the corresponding external map, thereby enabling the conversion of various applications related to visualization into score maximization. Experimental results demonstrated that the proposed method could not only be directly used for lane-based 2D data augmentation but also be extended to 3D localization, for viewpoint pose estimation, which is essential for various AR scenarios.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115938682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From Virtual Reality to Neuroscience and Back: a Use Case on Peripersonal Hand Space Plasticity
Agata Marta Soccini, F. Ferroni, M. Ardizzi
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00082
The human brain does not represent space homogeneously; rather, it constructs multiple representations of space depending on the source of sensory stimulation and the nature of the interaction between the body and the environment. The peripersonal space is an imaginary area coded as a separate sector of space, as if there were a boundary between what the body might or might not interact with. We present an experimental paradigm that combines virtual reality (VR) and functional magnetic resonance imaging (fMRI) to investigate human behavior and its neural basis when training the plasticity of the peripersonal space around the hand. The expected results may shed light on a phenomenon of interest both for behavioral neuroscience and for the interaction of embodied self-avatars in virtual environments.
{"title":"From Virtual Reality to Neuroscience and Back: a Use Case on Peripersonal Hand Space Plasticity","authors":"Agata Marta Soccini, F. Ferroni, M. Ardizzi","doi":"10.1109/AIVR50618.2020.00082","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00082","url":null,"abstract":"The human brain does not represent space homogeneously, but it constructs multiple representations of it depending on the source of sensory stimulation and the nature of interaction between the body and the environment. The peripersonal space is defined as an imaginary area coded as separated sector of space, as if there were a boundary between what the body might or might not interact with. We present an experimental pattern that combines the use of virtual reality (VR) and functional magnetic resonance imaging (fMRI) to investigate human behavior and neural basis in case of training of the plasticity of the peripersonal space around the hand. The expected results may provide knowledge on a phenomenon interesting for behavioral neuroscience as well as for the interaction of embodied self-avatars in virtual environments.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130875119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}