To capture a remote 3D image, conventional stereo cameras attached to a robot head have commonly been used. However, when the head and cameras rotate, the buffered image is degraded by latency and motion blur, which may cause VR sickness. In the present study, we propose a method named TwinCam that uses two 360° cameras spaced at the standard interpupillary distance and keeps the lens direction constant in world coordinates even when the camera bodies are rotated to follow the orientation of the observer's head and the positions of the eyes. We expect this method to reduce the image buffer size sent to the observer, because each camera captures the omnidirectional image without lens rotation. This paper introduces the mechanical design of our camera system and its potential for visual telepresence through three experiments. Experiment 1 confirmed that a stereoscopic rather than monoscopic camera is required for highly accurate depth perception, and Experiments 2 and 3 showed that our mechanical camera setup can reduce motion blur and VR sickness.
"Live Stereoscopic 3D Image with Constant Capture Direction of 360° Cameras for High-Quality Visual Telepresence" by Y. Ikei, Vibol Yem, Kento Tashiro, Toi Fujie, Tomohiro Amemiya, M. Kitazaki. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8797876
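The counter-rotation at the heart of TwinCam can be sketched in a few lines. This is an illustration only, not the authors' control code: the function names, the 2D (yaw-only) simplification, and the IPD value are assumptions.

```python
# Minimal sketch of the TwinCam idea: the two camera bodies orbit with the
# head yaw so they stay at the eye positions, while each lens is
# counter-rotated so its heading never changes in world coordinates.
import math

IPD = 0.064  # standard interpupillary distance in metres (assumed value)

def camera_commands(head_yaw_rad):
    """Return (left_pos, right_pos, lens_yaw_relative_to_mount)."""
    half = IPD / 2.0
    c, s = math.cos(head_yaw_rad), math.sin(head_yaw_rad)
    # Eye positions rotate about the head centre with the head yaw.
    left_pos = (-half * c, -half * s)
    right_pos = (half * c, half * s)
    # Counter-rotate each lens by the head yaw so its world-frame direction
    # stays fixed; the viewer's look direction is applied later, in
    # software, as a crop of the omnidirectional image.
    lens_yaw_mount = -head_yaw_rad
    return left_pos, right_pos, lens_yaw_mount

# After a 90-degree head turn the camera bodies have swapped sides, but
# each lens still points the way it did before the turn.
print(camera_commands(math.pi / 2))
```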
Motion sickness is a major cause of discomfort for users of virtual reality (VR) systems. In recent years, several techniques have been proposed to mitigate it, such as high-quality "room-scale" tracking systems, dynamic field of view modification, and displaying static or dynamic rest frames. At the same time, the absence of real-world spatial cues can cause trouble during movement in virtual reality, and users may collide with physical obstacles. To address both of these problems, we propose a novel technique that combines dynamic field of view modification with rest frames generated from 3D scans of the physical environment. As the user moves, either physically or virtually, the displayed field of view can be artificially reduced to reveal a wireframe visualization of the real-world geometry in the periphery, rendered in the same reference frame as the user. Although empirical studies have not yet been conducted, informal testing suggests that this approach is a promising method for reducing motion sickness and improving user safety at the same time.
"Combining Dynamic Field of View Modification with Physical Obstacle Avoidance" by Fei Wu, Evan Suma Rosenberg. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8798015
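A minimal sketch of how the two ingredients could fit together: the field of view shrinks with the user's combined physical and virtual speed, and the freed periphery fades in the pre-scanned room wireframe. Function names, thresholds, and the linear mapping are assumptions, not the paper's parameters.

```python
# Speed-driven FOV restriction with a real-world wireframe in the periphery.
def restricted_fov(speed_mps, fov_full=110.0, fov_min=70.0,
                   v_onset=0.5, v_max=3.0):
    """Linearly map speed (m/s) to a rendered FOV in degrees (clamped)."""
    if speed_mps <= v_onset:
        return fov_full
    t = min((speed_mps - v_onset) / (v_max - v_onset), 1.0)
    return fov_full - t * (fov_full - fov_min)

def peripheral_alpha(speed_mps, fov_full=110.0, fov_min=70.0):
    """Opacity of the room-scan wireframe shown outside the restricted FOV;
    it only becomes visible once the FOV starts to shrink."""
    fov = restricted_fov(speed_mps, fov_full, fov_min)
    return (fov_full - fov) / (fov_full - fov_min)

for v in (0.0, 1.0, 2.0, 4.0):
    print(v, round(restricted_fov(v), 1), round(peripheral_alpha(v), 2))
```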
Recent progress in high-speed projectors using DMDs (Digital Micromirror Devices) has enabled low-latency motion adaptability of displayed images, which is a key challenge in achieving projection-based dynamic interaction systems. This paper presents an evaluation of different approaches to achieving fast motion adaptability with DMD projectors, through a subjective image-evaluation experiment and a discrimination experiment. The results suggest that the approach proposed by the authors, which updates the image position for every binary frame instead of for every video frame, applied to 60-fps video input, offers perceptual image quality comparable to that offered by 500-fps projection.
"Perception of Motion-Adaptive Color Images Displayed by a High-Speed DMD Projector" by Wakana Oshiro, S. Kagami, K. Hashimoto. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8797850
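The per-binary-frame update lends itself to simple arithmetic. A DMD renders each video frame as a rapid burst of binary bit planes; the proposal repositions every bit plane along the target's tracked motion instead of holding the position fixed for the whole video frame. The binary rate and names below are assumptions for illustration.

```python
# Per-binary-frame repositioning for one 60-fps video frame on a DMD.
VIDEO_FPS = 60
BINARY_FPS = 2880                             # assumed bit-plane rate
PLANES_PER_FRAME = BINARY_FPS // VIDEO_FPS    # 48 bit planes per video frame

def binary_frame_offsets(pos_px, vel_px_per_s):
    """Positions at which to display each bit plane of one video frame,
    extrapolated from the target's tracked velocity."""
    dt = 1.0 / BINARY_FPS
    return [pos_px + vel_px_per_s * k * dt for k in range(PLANES_PER_FRAME)]

# With a 600 px/s target, per-video-frame updating jumps 10 px at once,
# while per-binary-frame updating moves ~0.21 px between bit planes.
print(binary_frame_offsets(100.0, 600.0)[:4])
```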
We developed a virtual reality interface for cleaning sonar point-cloud data. Experimentally, users performed better with this VR interface than with a mouse and keyboard at a desktop monitor. However, hydrographers often clean data aboard moving vessels, which can cause motion sickness. VR users experience motion sickness as well, in the form of simulator sickness, and combining the two is a worst-case scenario. Common advice for avoiding seasickness is to focus on the horizon or on distant objects, keeping one's frame of reference external. We explored moving the surroundings in a virtual environment to match the vessel's motion, to assess whether this provides similar visual cues that could prevent seasickness. An informal evaluation in a seasickness-inducing simulator was conducted, and subjective preliminary results hint at such compensation's potential for reducing motion sickness, enabling the use of immersive VR technologies aboard underway ships.
"Reducing Seasickness in Onboard Marine VR Use through Visual Compensation of Vessel Motion" by A. Stevens, T. Butkiewicz. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8797800
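A minimal sketch of the compensation idea, under assumed names and a roll/pitch-only simplification: the vessel's IMU attitude is smoothed and applied to the virtual surroundings each frame, while the data being edited stays in the user's own reference frame.

```python
# Drive the virtual surroundings with the vessel's motion so the visual
# scene tilts the way the real world does, like watching the horizon.
import math

class VesselMotionCompensator:
    """Low-pass filtered copy of the vessel's IMU attitude, applied to the
    virtual surroundings (roll/pitch only in this sketch)."""
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.roll = 0.0
        self.pitch = 0.0

    def update(self, imu_roll_rad, imu_pitch_rad):
        a = self.smoothing
        self.roll = a * self.roll + (1 - a) * imu_roll_rad
        self.pitch = a * self.pitch + (1 - a) * imu_pitch_rad
        # Rotation to apply to the environment (horizon, walls); the sonar
        # point cloud being cleaned stays fixed in the user's frame.
        return (self.roll, self.pitch, 0.0)

comp = VesselMotionCompensator()
print(comp.update(math.radians(3.5), math.radians(-1.2)))
```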
We conducted a research project toward an inclusive society from the viewpoint of computational assistive technologies. The project aims to explore AI-assisted human-machine integration techniques for overcoming impairments and disabilities. By connecting assistive hardware and auditory/visual/tactile sensors and actuators with a user-adaptive and interactive learning framework, we propose and develop a proof of concept of our "xDiversity AI platform" to meet the various abilities, needs, and demands in our society. One of our studies, for example, is an AI-driven wheelchair called the "tele wheelchair"; its purpose is not fully automated driving, but labor saving at nursing-care sites and care through natural communication. These attempts aim to solve the challenges facing the body and the sense organs with the help of AI and related technologies. In this keynote we present these case studies and our final goal for the social design and deployment of assistive technologies toward an inclusive society.
"Keynote Speaker: Virtual Reality for Enhancing Human Perceptional Diversity Towards an Inclusive Society" by Yoichi Ochiai. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/vr.2019.8798046
Virtual Reality (VR) constitutes an advantageous alternative for research on scenarios that are not feasible to study under real-life conditions. In the presented study, this technology was used for behavioral observation of participants exposed to autonomous vehicles (AVs). Further data were collected via questionnaires before the experience, directly after it, and one month later, to measure the impact the experience had on participants' general attitude toward AVs. Although the results were not statistically significant, first insights suggest that participants with little prior gaming experience were more strongly affected than gamers. Future work will involve a bigger sample size and refined questionnaires.
"A Study in Virtual Reality on (Non-)Gamers' Attitudes and Behaviors" by Sebastian Stadler, H. Cornet, F. Frenkler. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8797750
The advent of modern and affordable augmented reality headsets like the Microsoft HoloLens has sparked new interest in using virtual and augmented reality technology for the analysis of molecular data. For any visualisation in immersive, mixed-reality scenarios, a sufficiently high rendering speed is an important factor, and the limited processing power of fully untethered devices is at odds with computationally expensive visualisations. Recent research shows that the space-filling model of even small data sets from the Protein Data Bank (PDB) cannot be rendered at desirable frame rates on the HoloLens. In this work, we report on how to improve the rendering speed of atom-based visualisation of proteins and how the rendering of more abstract representations of the molecules compares against it. We complement our findings with in-depth GPU and CPU performance numbers.
"Optimised Molecular Graphics on the HoloLens" by C. Müller, Matthias Braun, T. Ertl. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8798111
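The abstract does not spell out which optimisations were applied, but a back-of-the-envelope comparison shows why atom-based (space-filling) rendering strains an untethered device, and why the screen-aligned sphere impostors common in molecular graphics are attractive. All counts below are illustrative assumptions, not figures from the paper.

```python
# Per-frame geometry cost: tessellated spheres vs. sphere impostors.
ATOMS = 5_000                # a small PDB entry
SPHERE_TRIANGLES = 320       # a modestly tessellated UV sphere per atom
IMPOSTOR_VERTICES = 4        # one quad per atom, shaded as a sphere

# Unindexed vertex counts submitted to the GPU each frame.
mesh_vertices = ATOMS * SPHERE_TRIANGLES * 3
impostor_vertices = ATOMS * IMPOSTOR_VERTICES
print(f"tessellated: {mesh_vertices:,} vertices/frame")
print(f"impostors:   {impostor_vertices:,} vertices/frame "
      f"({mesh_vertices / impostor_vertices:.0f}x fewer)")
```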
We propose a novel method for estimating the 3D geometry of indoor scenes from multiple spherical images. Our technique produces a dense depth map registered to a reference view so that depth-image-based rendering (DIBR) techniques can be used to provide three-degrees-of-freedom-plus (3-DoF+) immersive experiences to virtual reality users. The core of our method is to use large-displacement optical flow algorithms to obtain point correspondences, with cross-checking and geometric constraints to detect and remove bad matches. We show that selecting a subset of the best dense matches leads to better pose estimates than traditional approaches based on sparse feature matching, and we explore a weighting scheme to obtain the depth maps. Finally, we adapt a fast image-guided filter to the spherical domain to enforce local spatial consistency, improving the 3D estimates. Experimental results indicate that our method quantitatively outperforms competitive approaches on computer-generated images and synthetic data under noisy correspondences and camera poses. We also show that the estimated depth maps, obtained from only a few real spherical captures of the scene, can produce coherent synthesized binocular stereoscopic views using traditional DIBR methods.
"Dense 3D Scene Reconstruction from Multiple Spherical Images for 3-DoF+ VR Applications" by T. L. T. D. Silveira, C. Jung. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8798281
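The cross-checking step lends itself to a short sketch: a dense correspondence is kept only if the backward flow maps the matched point back to (nearly) its origin. Array conventions and the tolerance are assumptions; the paper's additional geometric (epipolar) test is omitted here.

```python
# Forward-backward consistency check on dense optical flow fields.
import numpy as np

def cross_check(flow_fwd, flow_bwd, tol_px=1.0):
    """flow_fwd, flow_bwd: HxWx2 arrays of (dx, dy). Returns an HxW mask of
    correspondences that survive forward-backward consistency."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each pixel lands under the forward flow (nearest neighbour).
    xf = np.clip(np.rint(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yf = np.clip(np.rint(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    # Round trip: forward flow plus backward flow at the landing point
    # should return (almost) to the starting pixel.
    round_trip = flow_fwd + flow_bwd[yf, xf]
    err = np.linalg.norm(round_trip, axis=-1)
    return err < tol_px
```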
A 2×2 between-subjects experiment (a) investigated and compared the instructional effectiveness of immersive virtual reality (VR) versus video as media for teaching scientific procedural knowledge, and (b) examined the efficacy of enactment as a generative learning strategy in combination with the respective instructional media. A total of 117 high school students (74 female) were randomly distributed across four instructional groups: VR and enactment, video and enactment, VR only, and video only. Outcome measures included declarative knowledge, procedural knowledge, knowledge transfer, and subjective ratings of perceived enjoyment. Results indicated no main effects or interactions for declarative knowledge or transfer. However, there was a significant interaction between media and method for procedural knowledge, with the VR-and-enactment group performing best. Furthermore, media also had a significant effect on students' perceived enjoyment: the groups enjoyed the VR simulation significantly more than the video. The results deepen our understanding of how we learn with immersive technology and suggest important implications for implementing VR in schools.
"Virtual Reality Instruction Followed by Enactment Can Increase Procedural Knowledge in a Science Lesson" by N. K. Andreasen, Sarune Baceviciute, Prajakt Pande, G. Makransky. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8797755
A haptic grasp interface based on the force myography (FMG) technique is reported. Hand movements and forces during object manipulation are sensed by an optical fiber sensor attached to the forearm; the virtual contact is then computed, and the reaction forces are delivered to the subject through graphical and vibrotactile feedback. The system was successfully tested with different objects, providing a non-invasive and realistic approach for applications in virtual-reality environments.
"Haptic Interface Based on Optical Fiber Force Myography Sensor" by E. Fujiwara, Yu Tzu Wu, M. K. Gomes, W. H. A. Silva, C. Suzuki. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). doi:10.1109/VR.2019.8797788
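A minimal sketch of the feedback loop the abstract describes, with assumed names and calibration: the optical-fiber FMG reading is mapped to an estimated grasp force, and when virtual contact is detected the reaction is rendered as vibrotactile amplitude.

```python
# FMG reading -> estimated grasp force -> vibrotactile reaction on contact.
def grasp_force(fmg_reading, baseline, gain=2.5):
    """Map the raw FMG sensor reading to an estimated grasp force (N).
    The linear calibration here is an assumption for illustration."""
    return max(0.0, (fmg_reading - baseline) * gain)

def vibration_amplitude(force_n, stiffness=0.8, max_amp=1.0):
    """Reaction amplitude proportional to grasp force, clamped to the
    vibrotactile actuator's range."""
    return min(max_amp, stiffness * force_n / 10.0)

def feedback_step(fmg_reading, baseline, in_contact):
    """One update of the loop: vibrate only while the virtual hand touches
    the object; the graphical cue would be driven from the same force."""
    f = grasp_force(fmg_reading, baseline)
    return vibration_amplitude(f) if in_contact else 0.0

print(feedback_step(fmg_reading=3.2, baseline=1.0, in_contact=True))
```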