Title: Keynote Speaker: Let's Unleash Entertainment! VR Possibilities Learned through Entertainment Facility “VR Zone”
Authors: Junichiro Koyama, Y. Tamiya
DOI: https://doi.org/10.1109/vr.2019.8798301
We have developed and operated 23 different VR activities while building and expanding our VR entertainment facility VR ZONE in Odaiba (2016), Shinjuku (2017), and Osaka (2018). Drawing on these experiences, we share some of our know-how on the qualities and development of VR entertainment, as well as its future possibilities.
Title: Hybrid Camera System for Telepresence with Foveated Imaging
Authors: M. Syawaludin, Chanho Kim, Jae-In Hwang
DOI: https://doi.org/10.1109/VR.2019.8798011
To improve a local HMD user's sense of telepresence, a high-resolution view of the remote environment is necessary. However, current commodity omnidirectional cameras cannot provide enough resolution for the human eye, and simply using a higher-resolution omnidirectional camera is infeasible because it would increase the streaming bandwidth. We propose a hybrid camera system that conveys higher resolution in the HMD user's viewport region of interest (ROI) within the available bandwidth. The hybrid camera consists of an omnidirectional camera and a PTZ camera mounted close to each other. The HMD user's head orientation controls the PTZ camera orientation, and the user controls the PTZ camera's zoom level to obtain higher resolution, up to the camera's maximum optical zoom. The remote views obtained from the two cameras are streamed to the HMD user and stitched into one combined view. This combined view simulates the human visual system (HVS) phenomenon called foveation, in which only a small part of the visual field is seen in high resolution while the rest is in low resolution.
Title: Virtual Reality and Photogrammetry for Improved Reproducibility of Human-Robot Interaction Studies
Authors: Mark Murnane, Max Breitmeyer, Cynthia Matuszek, Don Engel
DOI: https://doi.org/10.1109/VR.2019.8798186
Collecting data in robotics, especially on human-robot interactions, traditionally requires a physical robot in a prepared environment, which presents substantial scalability challenges. First, robots provide many possible points of system failure, while the availability of human participants is limited. Second, for tasks such as language learning, it is important to create environments that provide interesting, varied use cases; traditionally, this requires a prepared physical space for each scenario being studied. Finally, the expense of acquiring robots and preparing spaces places serious limitations on the reproducibility of experiments. We therefore propose a novel mechanism for using virtual reality to simulate robotic sensor data in a series of prepared scenarios. This allows for a reproducible dataset that other labs can recreate using commodity VR hardware. We demonstrate the effectiveness of this approach with an implementation that includes a simulated physical context, a reconstruction of a human actor, and a reconstruction of a robot. This evaluation shows that even a simple “sandbox” environment allows us to simulate robot sensor data, as well as the movement (e.g., viewport) and speech of humans interacting with the robot in a prescribed scenario.
Title: Simulation and Evaluation of Three-User Redirected Walking Algorithm in Shared Physical Spaces
Authors: Tianyang Dong, Yifan Song, Yuqi Shen, Jing Fan
DOI: https://doi.org/10.1109/VR.2019.8798319
Shifting from single-person experiences to multi-user interactions is an inevitable trend in virtual reality technology. Existing methods primarily address one- or two-user redirected walking and do not handle the additional challenge of potential collisions among three or more users who are moving both virtually and physically. To apply redirected walking to multiple users immersed in virtual reality experiences, we present a novel three-user redirected walking algorithm for shared physical spaces. In addition, we present the steps to apply three-user redirected walking to a multiplayer VR scene, where users are divided into groups based on their motion states; the strategy can then be applied to each group to handle redirected walking with more than three users. The results show that sharing a physical space using our three-user redirected walking algorithm is feasible.
Title: In-Situ Labeling for Augmented Reality Language Learning
Authors: Brandon Huynh, J. Orlosky, Tobias Höllerer
DOI: https://doi.org/10.1109/VR.2019.8798358
Augmented Reality is a promising interaction paradigm for learning applications. It has the potential to improve learning outcomes by merging educational content with spatial cues and semantically relevant objects within a learner's everyday environment. The impact of such an interface could be comparable to the method of loci, a well-known memory-enhancement technique used by memory champions and polyglots. However, using Augmented Reality in this manner is still impractical for a number of reasons. Scalable object recognition and consistent labeling of objects is a significant challenge, and interaction with arbitrary (unmodeled) physical objects in AR scenes has consequently not been well explored. To help address these challenges, we present a framework for in-situ object labeling and selection in Augmented Reality, with a particular focus on language learning applications. Our framework uses a generalized object recognition model to identify objects in the world in real time, integrates eye tracking to facilitate selection and interaction within the interface, and incorporates a personalized learning model that dynamically adapts to the student's growth. We show our current progress in the development of this system, including preliminary tests and benchmarks, explore challenges with using such a system in practice, and discuss our vision for the future of AR language learning applications.
Title: Freely Explore the Scene with 360° Field of View
Authors: Feng Dai, Chen Zhu, Yike Ma, Juan Cao, Qiang Zhao, Yongdong Zhang
DOI: https://doi.org/10.1109/VR.2019.8797922
By providing a 360° field of view, spherical panoramas are widely used in virtual reality (VR) systems and street-view services. However, due to bandwidth or storage limitations, existing systems only provide sparsely captured panoramas and support limited interaction modes. Although there are methods that can synthesize novel views from captured panoramas, the generated views all lie on the lines connecting existing views, so these methods do not support free-viewpoint navigation. In this paper, we propose a new panoramic image-based rendering method. Our method takes pre-captured images as input and can synthesize panoramas at novel views that are far from the input camera positions. Thus, it supports free exploration of the scene with a 360° field of view.
Title: Parasitic Body: Exploring Perspective Dependency in a Shared Body with a Third Arm
Authors: Ryo Takizawa, Atsushi Hiyama, Adrien Verhulst, Katie Seaborn, M. Fukuoka, M. Kitazaki, M. Inami, Maki Sugimoto
DOI: https://doi.org/10.1109/VR.2019.8798351
With advancements in robotics, systems featuring wearable robotic arms teleoperated by a third party are appearing. An important aspect of these systems is the visual feedback provided to the third-party operator. This can be achieved by placing a wearable camera on the robotic arm's “host,” or Main Body Operator (MBO), but such a setup makes the visual feedback dependent on the movements of the main body. Here we introduce a VR system called Parasitic Body to explore a shared-body concept in VR representative of the wearable robotic arm's “host” (the MBO) and of the teleoperator, here called the Parasite Body Operator (PBO). Two users jointly operate a shared virtual body with a third arm: the MBO controls the main body, and the PBO controls a third arm extending from the left shoulder of the main body. We focus on the perspective dependency of the PBO, whose view depends on the movements of the MBO, in a “finding and reaching” task.
Title: Collaborative Data Analytics Using Virtual Reality
Authors: Huyen Nguyen, Benjamin Ward, U. Engelke, B. Thomas, Tomasz Bednarz
DOI: https://doi.org/10.1109/VR.2019.8797845
Immersive analytics allows large amounts of data and complex structures to be investigated concurrently. We propose a collaborative analytics system that benefits from recent advances in immersive technologies for collaborators working in the early stages of data exploration. We implemented a combination of the Star Coordinates and Star Plot visualisation techniques to support the visualisation of multidimensional data and the encoding of datasets using simple and compact visual representations. To support data analytics tasks, we propose tools and interaction techniques with which users can build decision trees for visualising and analysing data in a top-down manner.
Title: Estimation of Detection Thresholds for Redirected Turning
Authors: Junya Mizutani, Keigo Matsumoto, Ryohei Nagao, Takuji Narumi, T. Tanikawa, M. Hirose
DOI: https://doi.org/10.1109/VR.2019.8797976
Redirection makes it possible to walk around a vast virtual space within a limited real space while preserving a natural walking sensation, by applying a gain to the user's movement in the real space. However, existing methods cannot manipulate the walking path while maintaining the naturalness of walking when the user turns at a corner. To realize natural manipulation of turns at corners, this study proposes novel “turning gains,” which scale the virtual turning angle relative to the real turning angle. The results of an experiment estimating the detection thresholds of turning gains indicate that when the turning radius is 0.5 m, discrimination is more difficult than for rotation gains $(r = 0.0\,\mathrm{m})$.
Title: Vibro-tactile Feedback for Real-world Awareness in Immersive Virtual Environments
Authors: Dimitar Valkov, L. Linsen
DOI: https://doi.org/10.1109/VR.2019.8798036
In immersive virtual environments (IVEs), users' visual and auditory perception is replaced by computer-generated stimuli, so knowing the positions of real objects is crucial for physical safety. While some solutions exist, e.g., using virtual replicas or visible cues indicating the boundaries of the interaction space, they either constrain the IVE design or depend on the hardware setup. Moreover, most solutions cannot handle lost tracking, erroneous tracker calibration, or moving obstacles, which are common scenarios, especially in the increasingly popular home virtual reality settings. In this paper, we present a stand-alone hardware device designed to alert IVE users to potential collisions with real-world objects. It uses distance sensors mounted on a head-mounted display (HMD) and vibro-tactile actuators inserted into the HMD's face cushion. We implemented different types of sensor-actuator mappings with the goal of finding a mapping function that is minimally obtrusive in normal use but alerts efficiently in risk situations.