C-OLiVE: Group co-located interaction in VEs for contextual learning
P. Apostolellis, D. Bowman
In informal learning spaces that employ digital content, such as museums, visitors either do not get adequate exposure to the content or receive information through passive instruction delivered by a museum docent to the whole group. This research aims to identify which elements of co-located group collaboration, virtual environments, and serious games can be leveraged for an enhanced learning experience. Our hypothesis is that synchronous, co-located group collaboration will afford greater learning than conventional approaches. We developed C-OLiVE, an interactive virtual learning environment supporting tripartite group collaboration, which we will use as a testbed to address our research questions. In this paper, we discuss our proposed research, which involves exploring the benefits of the technologies involved and proposing a list of design guidelines for anyone interested in applying them when developing virtual environments for informal learning spaces.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802085
A proof-of-concept study on the impact of artificial hypergravity on force-adapted target sizing for direct Augmented Reality pointing
Daniela Markov-Vetter, Vanja Zander, J. Latsch, O. Staadt
The performance of direct object selection in Augmented Reality (AR), when coded outside of the human egocentric body frame of reference, decreases under short-term altered gravity, so an adequate countermeasure is required. This paper presents the results of a proof-of-concept (POC) study investigating the impact of simulated hypergravity on target size and distance. For gravity-dependent resizing and repositioning of targets, we used Hooke's law to model the target deformation. The POC study comprised two experiments, in which hypergravity was induced by a long-arm human centrifuge and by weights attached to the subjects' dominant arm. The study showed that at higher gravity levels, larger target sizes and larger distances between targets led to increased performance.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802068
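The abstract does not spell out how Hooke's law maps gravity level to target geometry, so the following is a minimal sketch of one plausible mapping: the extra gravity-induced load on the arm is treated as a spring force whose deformation enlarges the target and its spacing. The function name, spring constant, arm mass, and baseline sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): scale an AR target's
# radius and spacing with gravity level using a Hooke's-law analogy. The extra
# gravity-induced load on the arm is treated as a spring force, and the
# resulting "deformation" enlarges the target. k, arm_mass, and the baseline
# values are placeholder parameters.

G0 = 9.81  # standard gravity, m/s^2

def gravity_adapted_target(base_radius_m, base_spacing_m, g_level,
                           arm_mass_kg=4.0, k_n_per_m=500.0):
    """Return (radius, spacing) enlarged for a given g-level (1.0 = 1 g)."""
    extra_force = arm_mass_kg * (g_level - 1.0) * G0   # load beyond normal gravity
    deformation = max(extra_force, 0.0) / k_n_per_m    # Hooke's law: x = F / k
    return base_radius_m + deformation, base_spacing_m + deformation

# Example: targets grow as simulated gravity rises from 1 g to 2 g.
for g in (1.0, 1.5, 2.0):
    r, s = gravity_adapted_target(0.03, 0.10, g)
    print(f"{g:.1f} g -> radius {r*100:.1f} cm, spacing {s*100:.1f} cm")
```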
MuVR: A Multi-User Virtual Reality platform
Jerald Thomas, Raghav Bashyal, Samantha Goldstein, Evan A. Suma
Consumer adoption of virtual reality technology has historically been held back by poor accessibility, the lack of intuitive multi-user capabilities, dependence on external infrastructure for rendering and tracking, and the amount of time and effort required to enter virtual reality systems. This poster presents the current status of our work creating MuVR, a Multi-User Virtual Reality platform that seeks to overcome these hindrances. The MuVR project comprises four main goals: scalable and easy-to-use multi-user capabilities, portable and self-contained hardware, rapid deployability, and ready accessibility to others. We provide a description of the platform we developed to address these goals and discuss potential directions for future work.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802078
CAVE2 documentary
J. Leigh, Andrew E. Johnson, L. Renambot, Lance Long, D. Sandin, Jonas Talandis, Alessandro Febretti, Arthur Nishimoto
This video describes the CAVE2 Hybrid Reality Environment developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC). CAVE2 is a near-seamless, flat-panel-based, surround-screen immersive system that can simultaneously display both 2D and 3D information, providing more flexibility for mixed-media applications and more opportunities for groups of researchers to work together with large heterogeneous datasets [1]. The system is a cylinder 24 feet in diameter and 8 feet tall, consisting of 72 near-seamless, off-axis-optimized passive-stereo LCD panels that create a 320-degree panoramic environment displaying information at 37 megapixels in stereoscopic 3D or 74 megapixels in 2D, with a horizontal visual acuity of 20/20 [2]. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. EVL's Omegalib middleware supports fully immersive OpenGL, OpenSceneGraph, and VTK applications, as well as EVL's SAGE middleware, to achieve a hybrid 2D/3D environment. CAVE2 also supports CalVR (developed at the Calit2 Qualcomm Institute at the University of California, San Diego), Electro (by Robert Kooima, currently at Louisiana State University), and Google Earth.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802097
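As a rough consistency check on the quoted figures, the short sketch below recomputes the pixel counts and the center-of-room angular resolution. The 18-column by 4-row panel grid and the 1366 x 768 native panel resolution are assumptions made for illustration only; the abstract states just the totals.

```python
# Back-of-envelope check of the quoted CAVE2 figures. The 18 x 4 panel grid and
# the 1366 x 768 native panel resolution are assumptions for illustration; they
# are not stated in the abstract.
cols, rows = 18, 4                 # assumed panel grid (18 * 4 = 72 panels)
panel_w, panel_h = 1366, 768       # assumed native panel resolution
fov_deg = 320.0                    # panoramic coverage from the abstract

total_px = cols * rows * panel_w * panel_h
stereo_px = total_px // 2          # passive (interleaved) stereo halves per-eye pixels

horiz_px = cols * panel_w
arcmin_per_px = fov_deg * 60.0 / horiz_px   # at the center of the cylinder

print(f"2D pixels:      {total_px/1e6:.1f} MP  (abstract: 74 MP)")
print(f"Stereo pixels:  {stereo_px/1e6:.1f} MP  (abstract: 37 MP)")
print(f"Horizontal res: {arcmin_per_px:.2f} arcmin/pixel "
      f"({'<=' if arcmin_per_px <= 1.0 else '>'} 1 arcmin, i.e. 20/20 acuity)")
```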
Comparative study of input devices for a VR mine simulation
David J. Zielinski, B. Macdonald, Regis Kopper
It has been shown that virtual reality (VR) can be used to train mine workers for safety in critical situations [4]. The National Institute for Occupational Safety and Health (NIOSH) has a VR laboratory on its Pittsburgh campus. Currently, the input devices for the system are an Xbox 360 gamepad and an air mouse. Due to the high cost and added complexity of most 3D tracking systems, we first wanted to test whether the mine safety application could benefit from an upgrade to a 6-DOF tracking system. We therefore conducted a pilot study in Duke University's six-sided CAVE-type system and collected performance and questionnaire data for three tasks (selection, navigation, and maneuvering) and three devices (gamepad, air mouse, and 6-DOF wand). Results indicate that the wand allows users to complete tasks faster and is preferred by users, although in certain situations its use led to more errors.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802083
Demonstration: VR-HYPERSPACE — The innovative use of virtual reality to increase comfort by changing the perception of self and space
Mirabelle D'Cruz, H. Patel, Laura Lewis, S. Cobb, Matthias Bues, O. Stefani, Tredeaux Grobler, K. Helin, J. Viitaniemi, S. Aromaa, B. Fröhlich, S. Beck, André Kunert, Alexander Kulik, I. Karaseitanidis, P. Psonis, Nikos Frangakis, M. Slater, Ilias Bergstrom, Konstantina Kilteni, Elena Kokkinara, B. Mohler, Markus Leyrer, F. Soyka, E. Gaia, D. Tedone, M. Olbert, M. Cappitelli
Our vision is that, regardless of future variations in the interiors of airplane cabins, we can combine ever-advancing, state-of-the-art virtual and mixed reality technologies with the latest research in neuroscience and psychology to achieve high levels of comfort for passengers. Current surveys of passengers' experience during air travel reveal that they are least satisfied with the amount and effectiveness of their personal space and with their ability to work, sleep, or rest. Moreover, current trends suggest that the amount of available space will decrease, so passengers' physical comfort during a flight is likely to worsen significantly. The main challenge is therefore to enable passengers to maintain a high level of comfort and satisfaction while confined to a restricted physical space.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802104
A unique way to increase presence of mobility impaired users — Increasing confidence in balance
Rongkai Guo, G. Samaraweera, J. Quarles
Previous research on healthy subjects showed that a higher sense of presence can be elicited with a full-body avatar than without one. However, little avatar research has been conducted with persons with mobility impairments. For these users, virtual environments (VEs) and avatars are becoming more common as tools for rehabilitation. If we can maximize presence in these VEs, users may be more effectively distracted from the pain and repetitiveness of rehabilitation, thereby increasing their motivation. To investigate this, we replicated the classic virtual pit experiment and included a responsive full-body avatar (or the lack thereof) as a 3D user interface. We recruited from two populations: mobility-impaired persons, and healthy persons as a control. The results give insight into further differences between healthy and mobility-impaired users' experience of presence in VEs.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802059
Virtual speech anxiety training — Effects of simulation fidelity on user experience
Sandra Poeschl, N. Döring
Realistic models in VR training applications are considered to positively influence presence and performance. The experimental study presented here analyzed the effect of simulation fidelity (static vs. animated audience) on presence, perceived realism, and anxiety in a virtual speech anxiety training application. No influence on presence or perceived realism was found, although an animated audience led to significantly higher anxiety while giving a talk.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802073
NASA Telexploration Project demo
J. Norris, Scott Davidoff
NASA's Telexploration Project seeks to make us better explorers by building immersive environments that feel like we are really there. The Mission Operations Innovation Office and its Operations Laboratory at the NASA Jet Propulsion Laboratory (JPL) founded the Telexploration Project and are researching how immersive visualization and natural human-robot interaction can enable mission scientists, engineers, and the general public to interact with NASA spacecraft and alien environments more effectively. These efforts have been accelerated through partnerships with many different companies, especially in the video game industry. The demos will exhibit some of the progress made at NASA and through its commercial partnerships by allowing attendees to experience Mars data acquired by NASA spacecraft in a head-mounted display, using several rendering and interaction techniques.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802112
Modeling interactions in continuum traffic
Hua Wang, Tianlu Mao, Zhaoqi Wang
Generating traffic scenarios with frequent lane changes is a major challenge for flow-based continuum traffic simulation. In this paper, we present a novel macroscopic method, the interactable cooperative driving lattice hydrodynamic model (interactable CDL-H model). We describe traffic flow along lanes and flow interactions between lanes in a unified continuum framework, and we further consider various constraints for detailed lane-changing simulation. The model retains the efficiency of traditional macroscopic traffic models while describing lane-changing behaviors effectively: it physically describes where, when, and how traffic flow moves into and out of a lane, which makes it possible to simulate and display lane-changing behaviors in large-scale virtual environments. The validity and efficiency of the interactable CDL-H model are demonstrated by comparing simulation results with real traffic data and with the one-dimensional CDL-H model.
2014 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2014.6802082
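The abstract does not give the CDL-H equations, so the sketch below only illustrates the general idea of a per-lane continuum description with an explicit exchange of flow between lanes. It uses a standard LWR-type conservation law with a Greenshields speed-density relation rather than the authors' lattice hydrodynamic formulation, and all parameters and the lane-exchange rule are assumptions made for this example.

```python
# Illustrative two-lane continuum traffic sketch (not the authors' CDL-H model).
# Each lane follows a 1D LWR-type conservation law with a Greenshields
# speed-density relation, plus a simple relaxation term that moves density
# toward the emptier lane to mimic lane changing.
import numpy as np

V_MAX, RHO_MAX = 30.0, 0.15      # free-flow speed (m/s), jam density (veh/m)
DX, DT, N = 50.0, 0.5, 200       # cell size (m), time step (s), number of cells
LANE_SWAP_RATE = 0.05            # fraction of the density gap exchanged per second

def flux(rho):
    """Greenshields flux q = rho * v(rho)."""
    return rho * V_MAX * (1.0 - rho / RHO_MAX)

def lax_friedrichs_step(rho):
    """One conservative Lax-Friedrichs update with zero-gradient boundaries."""
    r = np.pad(rho, 1, mode="edge")
    f = flux(r)
    return 0.5 * (r[:-2] + r[2:]) - DT / (2.0 * DX) * (f[2:] - f[:-2])

def step(lane_a, lane_b):
    lane_a, lane_b = lax_friedrichs_step(lane_a), lax_friedrichs_step(lane_b)
    # Lane-change source term: density drifts toward the less crowded lane.
    swap = LANE_SWAP_RATE * DT * 0.5 * (lane_a - lane_b)
    return np.clip(lane_a - swap, 0, RHO_MAX), np.clip(lane_b + swap, 0, RHO_MAX)

# Example: a congested patch in lane A slowly spills over into lane B.
lane_a = np.full(N, 0.02); lane_a[80:120] = 0.12
lane_b = np.full(N, 0.02)
for _ in range(600):
    lane_a, lane_b = step(lane_a, lane_b)
print(f"lane A max density: {lane_a.max():.3f}, lane B max density: {lane_b.max():.3f}")
```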