Human Face Reconstruction under a HMD Occlusion
Zhengfu Peng, Ting Lu, Zhaowen Chen, Xiangmin Xu, Shu-Min Lin
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8797959
With the help of existing augmented vision perception and motion capture technologies, virtual reality (VR) can immerse users in virtual environments. However, users find it difficult to convey their actual emotions to others in these environments: head-mounted displays (HMDs) significantly occlude the face, so the full face is hard to recover directly with traditional techniques. In this paper, we introduce a novel method that addresses this problem using only an RGB image of the person, without any additional sensors or devices. First, we use facial landmark points to estimate the user's face shape, expression, and pose. Then, from the unoccluded face region, we recover the face texture and the illumination of the current scene.
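The landmark-based fitting step described in this abstract typically begins by recovering a head pose from 2D landmarks and the corresponding 3D points of a face model. As a minimal illustrative sketch (not the authors' actual solver; the function name and weak-perspective camera assumption are mine), the pose can be estimated by least squares:

```python
import numpy as np

def estimate_pose_weak_perspective(pts2d, pts3d):
    """Estimate scale, rotation rows, and translation mapping 3D face-model
    landmarks to detected 2D landmarks under a weak-perspective camera --
    a common first step when fitting a morphable face model to an image."""
    mu2 = pts2d.mean(axis=0)
    mu3 = pts3d.mean(axis=0)
    A = pts3d - mu3                      # N x 3, centered model points
    B = pts2d - mu2                      # N x 2, centered image points
    # Solve B ~= A @ P.T for the 2x3 projection P by least squares.
    P, *_ = np.linalg.lstsq(A, B, rcond=None)
    P = P.T                              # 2 x 3
    scale = (np.linalg.norm(P[0]) + np.linalg.norm(P[1])) / 2.0
    R2 = P / scale                       # first two rows of the rotation (approx.)
    t = mu2 - scale * (R2 @ mu3)         # 2D translation
    return scale, R2, t
```

With the pose fixed, shape and expression coefficients of the model can then be solved for in a similar linear fashion against the remaining landmark residual.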
Training Transfer of Bimanual Assembly Tasks in Cost-Differentiated Virtual Reality Systems
S. Shen, Hsiang-Ting Chen, T. Leong
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8797917
Recent advances in affordable virtual reality headsets make VR training an economical alternative to traditional training. However, these devices offer a wide range of fidelity and interaction levels, and few works have evaluated their validity against traditional training formats. This paper presents a study comparing the learning efficiency of a bimanual gearbox assembly task across traditional training, VR training with direct 3D inputs (HTC VIVE), and VR training without 3D inputs (Google Cardboard). A pilot study showed that the HTC VIVE produced the best learning outcomes.
Improve the Decision-making Skill of Basketball Players by an Action-aware VR Training System
Wan-Lun Tsai, Liwei Su, Tsai-Yen Ko, Cheng-Ta Yang, Min-Chun Hu
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8798309
Decision-making is an essential part of basketball offense. In this paper, we propose a VR training system for offensive decision-making in basketball. During training, the trainee interacts intuitively with the system by wearing a motion capture suit and is trained in different virtual defensive scenarios designed by professional coaches. The system recognizes the offensive action performed by the user and provides corrective suggestions when the trainee makes a poor offensive decision. We compared the effectiveness of training with a conventional tactics board against the proposed VR system. Furthermore, we investigated the influence of using prerecorded 360-degree panoramic video versus computer-simulated virtual content to create the immersive training environment.
Interactive and Multimodal-based Augmented Reality for Remote Assistance using a Digital Surgical Microscope
E. Wisotzky, Jean-Claude Rosenthal, P. Eisert, A. Hilsmann, Falko Schmid, M. Bauer, Armin Schneider, F. Uecker
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8797682
We present an interactive, multimodal augmented reality system for computer-assisted surgery in the context of ear, nose and throat (ENT) treatment. The proposed processing pipeline uses fully digital stereoscopic imaging devices, which support multispectral and white-light imaging to generate high-resolution image data, and consists of five modules. Input/output data handling, hybrid multimodal image analysis, and a bi-directional interactive augmented reality (AR) and mixed reality (MR) interface for local and remote surgical assistance are central to the framework. The hybrid multimodal 3D scene analysis module uses different wavelengths to classify tissue structures and combines this spectral data with metric 3D information. Additionally, we propose a zoom-independent intraoperative tool for virtual ossicular prosthesis insertion (e.g., stapedectomy) guaranteeing very high metric accuracy in the sub-millimeter range (1/10 mm). A bi-directional interactive AR/MR communication module guarantees low latency while presenting consistent surgical information and avoiding information overload. Display-agnostic AR/MR visualization can show the analyzed data synchronized inside the digital binocular, on a 3D display, or on any connected head-mounted display (HMD). In addition, the analyzed data can be enriched with annotations from external clinical experts using AR/MR, as well as with accurate registration of preoperative data. The benefits of such a collaborative surgical system are manifold and should lead to improved patient outcomes through easier tissue classification and reduced surgical risk.
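The metric 3D measurements this abstract describes rest on stereo triangulation from the digital binocular's two views. As a hedged illustration of the underlying relation only (the device's calibration values below are hypothetical, not from the paper), depth follows from focal length, baseline, and disparity as Z = f·B/d:

```python
def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Classic rectified-stereo relation Z = f * B / d.

    focal_px:     focal length in pixels
    baseline_mm:  distance between the two cameras in millimetres
    disparity_px: horizontal pixel offset of a point between the views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px
```

For example, with an assumed f = 1000 px and B = 50 mm, a disparity of 100 px gives Z = 500 mm, and a one-pixel disparity error shifts the depth by roughly 5 mm; this sensitivity is why sub-millimeter accuracy requires high optical magnification, which in turn motivates the zoom-independent measurement tool the authors propose.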
Unifying Research to Address Motion Sickness
Mark S. Dennison, D. Krum
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8798297
Whether discussed as cybersickness, immersive sickness, simulator sickness, or virtual reality sickness, the ill effects of visuo-vestibular mismatch in immersive environments are a major concern for the wider adoption of virtual reality and related technologies. In this position paper, we discuss a unified research approach that may address motion sickness and identify critical research topics.
ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together
Cheng Yao Wang, Mose Sakashita, Upol Ehsan, Jingjin Li, A. S. Won
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8798363
We present a new way of sharing VR experiences over distance that allows people to relive their recorded experiences together in VR. We describe a pilot study examining the user experience when people remotely share a VR experience. Finally, we discuss the implications of sharing VR experiences across time and space.
An Educational Augmented Reality Application for Elementary School Students Focusing on the Human Skeletal System
M. Kouzi, Abdihakim Mao, Diego Zambrano
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8798058
Augmented reality (AR), as a growing field within human-computer interaction (HCI), has been gaining momentum in recent years. The ability to project interactive graphics into real-life environments serves a variety of research and commercial goals. In education, textbooks are still the primary tool students use to learn about new topics. Because AR invites interaction and exploration, it brings a playful component that is hard to replicate with regular textbooks. The application we developed allows elementary school students to interact with a fully three-dimensional human skeleton model using specialized virtual buttons. Students can explore this complex structure and learn the names of important bones using just a tablet, a picture, and their hands. Results show that the majority of students felt our AR application helped them visualize and learn more about the human skeletal system. Additionally, the data we gathered shows a 16% increase in correct responses on bone names after using the application. Our AR application successfully helped the students learn about the human skeletal system while introducing them to AR technologies.
Enchanting Your Noodles: GAN-based Real-time Food-to-Food Translation and Its Impact on Vision-induced Gustatory Manipulation
K. Nakano, K. Kiyokawa, Daichi Horita, Keiji Yanai, Nobuchika Sakata, Takuji Narumi
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8798336
We propose a novel gustatory manipulation interface that exploits the cross-modal effect of vision on taste, elicited through augmented reality (AR)-based real-time modulation of food appearance using a generative adversarial network (GAN). Unlike existing systems, which change only the color or texture pattern of a particular type of food in an inflexible manner, our system changes the appearance of food into multiple types of food flexibly, dynamically, and interactively in real time, following the deformation of the food the user is actually eating, by means of GAN-based image-to-image translation. Experimental results reveal that our system manipulates gustatory sensations to some extent, and that its effectiveness depends on the original and target food types as well as each user's food experience.
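A system like the one described above must blend the GAN-translated food image back into the live camera frame, restricted to the segmented food region, on every frame. The sketch below shows only that final compositing step; the mask source, blend weight, and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def composite_translated_food(frame, translated, mask, alpha=0.9):
    """Alpha-blend a translated food image into the camera frame.

    frame, translated: H x W x 3 float arrays in [0, 1]
    mask:              H x W float array, 1 inside the food region, 0 outside
    alpha:             strength of the translated appearance
    """
    m = (alpha * mask)[..., None]          # broadcast mask to H x W x 1
    return (1.0 - m) * frame + m * translated
```

Keeping alpha slightly below 1 preserves some real shading and specular highlights of the food, which helps the translated appearance track the deformation of what the user is actually chewing.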
Imspector: Immersive System of Inspection of Bridges/Viaducts
M. Veronez, L. G. D. Silveira, F. Bordin, Leonardo Campos Inocencio, Graciela Racolte, L. S. Kupssinskü, Pedro Rossa, L. Scalco
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8798295
One of the main difficulties in inspecting bridges and viaducts by observation is inaccessibility, or lack of access, throughout the structure. Mapping with remote sensors on unmanned aerial vehicles (UAVs) or by laser scanning can be an attractive alternative for the engineer, enabling more detailed analysis and diagnostics. Such mapping techniques also allow the generation of realistic 3D models that can be integrated into virtual reality (VR) environments. We present ImSpector, a system that uses realistic 3D models generated from remote sensors on UAVs to implement a virtual, immersive environment for inspections. The system gives the engineer a tool to carry out field tests directly from the office, ensuring agility, accuracy, and safety in bridge and viaduct inspections.
Virtual Reality Video Game Paired with Physical Monocular Blurring as Accessible Therapy for Amblyopia
O. Hurd, S. Kurniawan, M. Teodorescu
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23. DOI: 10.1109/VR.2019.8797997
This paper discusses a virtual reality (VR) therapeutic video game for treating the neurological eye disorder amblyopia. Amblyopia, often called lazy eye, entails weaker vision in one eye due to a poor connection between that eye and the brain. Until recently it was thought to be untreatable in adults, but new research has shown that with consistent therapy even adults can improve, especially through perceptual learning and video games. Even so, therapy compliance remains low because conventional therapies are perceived as invasive, dull, or boring. Our game aims to make amblyopia therapy more immersive, enjoyable, and playful. Users perceived the game as a fun and accessible alternative: it involves adhering a Bangerter foil (an opaque sticker) to a VR headset to blur vision in the amblyopic person's dominant eye while they play a VR video game. To perform well in the game, the brain must adapt to rely on the weaker eye, thereby reforging the neurological connection. While testing the game, we also studied users' behavior to investigate which visual and kinetic components were more effective therapeutically. Our findings are generally positive, showing that visual acuity in adults increases with 45 minutes of therapy. Amblyopia has many negative symptoms, including poor depth perception (necessary for daily activities such as driving), so this therapy could be life-changing for adults with amblyopia.