Title: A Sport Project and Its Future Applications: How to Implement Speculative Design to Fulfill Users' Needs
Authors: S. Palmieri, Alessio Righi, M. Bisson, A. Ianniello
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480439
Abstract: Extended realities, along with other enabling technologies, can improve the way we perform professional and entertainment activities. This paper describes the possibilities created by future applications in the field of sports. These possibilities emerge from an analysis of several aspects of the project, carried out to evaluate the feasibility of the idea: the opportunities that extended realities create when applied to learning processes, and the technological needs that must be addressed in the near future to meet users' needs within the sports system and to deliver significant benefits at an appropriate cost.
Title: Isolating Uncertainty of the Face Expression Recognition with the Meta-Learning Supervisor Neural Network
Authors: Stanislav Selitskiy, Nikolaos Christou, Natalya Selitskaya
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480447
Abstract: We investigate whether the well-known poor performance of convolutional neural networks applied head-on to the facial expression recognition task can be improved in terms of reducing false positive and false negative errors. An uncertainty-isolating technique is used that introduces an additional "unknown" class. A self-attention supervisor artificial neural network is used to "learn about learning" of the underlying convolutional neural networks; in particular, it learns the patterns of the underlying networks' parameters that accompany wrong or correct verdicts. A novel data set containing artistic makeup and occlusion images is used to aggravate the problem of the training data not representing the test data distribution.
Title: Comparison of Supervised Learning Models for the Prediction of Coronary Artery Disease
Authors: Hillary Vasquez-Gonzaga, Juan M. Gutiérrez Cárdenas
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480451
Abstract: Cardiovascular diseases, and Coronary Artery Disease (CAD) in particular, are leading causes of mortality among people of different ages and conditions. Combining less invasive biomarkers with machine learning techniques seems promising for the early detection of these illnesses. In the present work, we used the Z-Alizadeh Sani dataset, which comprises a set of medical features extracted with non-invasive methods, together with different machine learning models. The comparisons showed that the best results were obtained using the complete set and a subset of features as input to the Random Forest and XGBoost algorithms. Considering these results, we believe the features should also be analyzed in light of medical advances and findings on how these markers influence the presence of CAD.
Title: Comparison of Differences between Human Eye Imaging and HMD Imaging
Authors: Yusuke Jin, Xiaozhou Zhou, W. Xiao, Jiarui Li, Chengqi Xue
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480434
Abstract: As virtual reality (VR) headsets with head-mounted displays (HMDs) attract more and more attention, new research questions have emerged. In VR games, movies, and simulations, maintaining a high degree of depth perception while minimizing visual discomfort is essential to improving the overall user experience. Proprioceptive, monocular, and binocular cues constitute the depth cues of visual perception, which accurately provide the depth information of objects in personal space. Proprioceptive cues include convergence and accommodation, and quite a few studies have shown that convergence and accommodation in virtual environments exacerbate discomfort. Monocular cues include occlusion, relative object size, texture fineness, variations of light and shadow, optical thickness, perspective, and so on; they produce psychological depth cues, which can be affected by the texture mapping of objects and the environment settings in the virtual environment. Binocular vision enables users to perceive the depth of different objects in space. When both eyes observe the same target object, the images formed on the two retinas differ because of the different lateral positions of the eyes, and the retinal stimuli are processed in the brain to generate stereo vision. The depth perception generated by binocular vision is influenced by the depth cues in the environment. Differences in the depth perception of virtual information lead to a separation of spatial perception, which makes it difficult to establish a correspondence with real-world modes of operation. In this paper, the physiological characteristics of the human eye are studied, and the visual environment provided by HMD equipment is summarized. The limitations of the visual images provided by HMDs are obtained by comparing the differences, and the imaging mechanism of stereo vision in HMDs is discussed. By comparing the stereo vision generated by HMD images with that of the human eye, this paper aims to provide theoretical suggestions for building more realistic and immersive VR environments.
Title: Construction of Wine Quality Prediction Model based on Machine Learning Algorithm
Authors: Haoyu Zhang, Zhile Wang, Jiawei He, Jijiao Tong
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480443
Abstract: In this study, our group chose a red wine quality data set. To obtain a more accurate result, we cast quality as a binary classification problem and built models to predict red wine quality using machine learning algorithms, including Decision Tree, Boosting, Classification and Regression Tree (CART), and Random Forest. Among them, CART and Random Forest both achieved high accuracy. A binary tree was built with CART, and feature importance was analyzed. We also combined a logistic algorithm with Random Forest and compared the accuracy of the different models; in this way, we found a way to improve the accuracy of these models.
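The pipeline this abstract describes (binarize the quality score, then compare tree-based classifiers) can be sketched as follows. This is not the authors' code: the synthetic stand-in data, the 11-feature shape, and the specific hyperparameters are assumptions for illustration only.

```python
# Minimal sketch of the paper's setup: binary wine-quality labels,
# CART (a single decision tree) vs. Random Forest, compared by accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the red wine data: 11 physicochemical features,
# binary "good/bad" quality label (the paper binarizes the 0-10 score).
X, y = make_classification(n_samples=600, n_features=11, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "CART": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
print(scores)
```

On real data one would load the wine CSV and threshold the quality column instead of generating synthetic features; the comparison loop stays the same.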
Title: Assessment Metrics for the Representations of Highlight Artifacts in a Virtual Museum: A Case Study of Egyptian Artifacts
Authors: Yidi Wang
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480452
Abstract: With the development of the digital society, more and more museums have digitized their artifacts in detail and put together highlights for visitors to view selected representations of the virtual collections. It has therefore become crucial to evaluate the representations of these highlight artifacts. This paper presents a set of novel metrics for quantitatively assessing the representations of highlight artifacts in a virtual museum from six aspects: the periodization, geography, object type, figures depicted, material, and size representation ratio indices. A case study of Egyptian artifacts in a virtual museum with practical data shows the effectiveness of the proposed metrics.
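One plausible reading of a "representation ratio index" is the share of a facet's values in the full collection that also appear among the highlights. The formulation below is my assumption, not necessarily the author's exact definition; the facet names and sample data are hypothetical.

```python
# Hedged sketch of a representation ratio index for one facet (e.g. period):
# fraction of facet values in the full collection that the highlights cover.
def representation_ratio(collection, highlights, key):
    all_values = {item[key] for item in collection}
    shown = {item[key] for item in highlights if item[key] in all_values}
    return len(shown) / len(all_values) if all_values else 0.0

# Hypothetical Egyptian-collection example: four periods, two represented.
collection = [
    {"period": "Old Kingdom"}, {"period": "Middle Kingdom"},
    {"period": "New Kingdom"}, {"period": "Late Period"},
]
highlights = [{"period": "Old Kingdom"}, {"period": "New Kingdom"}]
print(representation_ratio(collection, highlights, "period"))  # 0.5
```

The same function would be applied per facet (geography, object type, figures depicted, material, size) to obtain the six indices the paper lists.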
Title: Enhanced Cognitive Training using Virtual Reality: Examining a Memory Task Modified for Use in Virtual Environments
Authors: Eric Redlinger, Bernhard Glas, Yang Rong
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480435
Abstract: Head-mounted display (HMD) based cognitive training enables a number of unique visual features, including 3D depth and immersive visuals. We examine the impact of these features by measuring task performance and EEG power on a cognitive task modified for use in virtual environments, and compare the results with those obtained using an unmodified version of the task. Some modified versions of the task resulted in increased cognitive load, but the differences did not correlate with changes in task performance. The increased EEG power observed may therefore reflect only changes at the perceptual level rather than task-contingent cognitive function. On the whole, these results suggest that adapting a standard cognitive training task to a virtual environment, in order to take advantage of the inherent benefits of an HMD, should pose few if any problems.
Title: Point Cloud Interaction and Manipulation in Virtual Reality
Authors: Daniel Garrido, R. Rodrigues, A. A. Sousa, João Jacob, D. Silva
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480437
Abstract: The use of virtual reality technologies for data visualization and analysis has been an emerging research topic in recent years. However, one type of data has been left neglected: the point cloud. While some strides have been made in the visualization and analysis of point clouds in immersive environments, they have yet to be used for direct-manipulation interactions. We hypothesize that, as with other types of data, bringing direct interaction and 3D visualization to point clouds may make basic handling tasks easier. An immersive application for virtual reality HMDs was developed in Unity to help investigate this hypothesis. It can parse classified point cloud files with extracted objects and represent them in a virtual environment. Several editing tools were also developed, designed with the HMD controllers in mind. The end result allows the user to perform basic transformation tasks on the point cloud with an ease of use and intuitiveness unmatched by traditional desktop-based tools.
Title: Enhancing the Customer Experience by Mixed Reality in the Retail Industry
Authors: Yirui Jiang, T. Tran, Leon Williams, Jaime Palmer, Edgar Simson, Daniel Benson, Michael Christopher, Daila Christopher
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480438
Abstract: Nowadays, customization through mixed reality to enhance the customer experience plays an important role in the retail industry. Customers can choose and customize products with their own images and labels in a virtual reality environment. However, existing asset creation pipelines are labor-intensive and time-consuming when displaying images and labels (logos) on 3D product models, and cannot easily be customized by customers in real time. In this paper, we therefore propose a real-time 3D logo mapping framework that converts a specified image into a 3D logo mesh and fits it to 3D product models. In the framework, a Convolutional Neural Network (CNN) is adopted to reconstruct 3D logo/product models from their images. The detailed 3D information and the logo location provided by the customer are used to select effective sampling points for mesh deformation. This method preserves both the visual quality and the details of the 3D product models. Experimental results on various logo sizes and product types show that our method can produce customized logos on 3D product models accurately and quickly.
Title: Deep Belief Network based Machine Learning for Daily Activities Classification
Authors: Tejtasin Phiasai, N. Chinpanthana
Venue: 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2021-07-23
DOI: https://doi.org/10.1145/3480433.3480444
Abstract: Human activity recognition has been a very active topic in pervasive computing for several years because of its important applications in assisted living, healthcare, and security surveillance. Many researchers capture and represent the details of human body gestures to determine human activity. While simple activities can be recognized from acceleration data alone, our research focuses on recognizing and understanding the variety of activities in daily living. In this work, we address this problem with a deep learning approach based on a deep belief network. A deep belief network is formed by stacking multiple Restricted Boltzmann Machines and training the model parameters for data reconstruction, feature construction, and classification. We tested our approach on PASCAL VOC datasets. The experimental results indicate that our proposed approach offers significant performance improvements, with a maximum of 79.8%.
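The stacked-RBM architecture described above can be approximated with scikit-learn's `BernoulliRBM` layers feeding a logistic-regression head. This is a sketch of the general DBN pattern, not the paper's implementation: the layer sizes, the random stand-in features, and the four activity classes are assumptions.

```python
# DBN-style sketch: two stacked Restricted Boltzmann Machines extract
# features layer by layer; a logistic-regression head does classification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 64))          # stand-in for normalized activity features
y = rng.integers(0, 4, size=200)   # four hypothetical daily-activity classes

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print(dbn.score(X, y))
```

Note that `Pipeline` trains the RBMs greedily in sequence (each on the previous layer's hidden activations), which mirrors the layer-wise pretraining that defines a deep belief network; a fine-tuning backpropagation pass, as in full DBN training, is omitted here.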