Pose Measurement of a GEO Satellite Based on Natural Features
Xiaodong Du, Bin Liang, Wenfu Xu, Xueqian Wang, Jianghua Yu
DOI: 10.1109/ICVRV.2012.16
To perform an on-orbit servicing mission, a robotic system must first approach and dock with the target autonomously, and measuring the relative pose is the key to doing so. This is challenging because existing GEO satellites are generally non-cooperative, i.e., no artificial markers are mounted to aid the measurement. This paper proposes a method based on natural features to estimate the pose of a GEO satellite during the R-bar final approach. The adapter ring and the bottom edges of the satellite are chosen as the recognition targets. From the circular feature, the relative position can be resolved, but two candidate orientations are obtained. The vanishing points formed by the bottom edges are used to resolve this orientation duality, so the on-board camera requires no specific motions. The corresponding image-processing and pose-estimation algorithms are presented, and computer simulations verify the proposed method.
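The orientation-duality step lends itself to a compact illustration. Below is a minimal Python sketch, not the authors' implementation: the vanishing point of the satellite's parallel bottom edges is computed as the intersection of the two image lines, back-projected through an assumed intrinsic matrix K, and used to pick between the two candidate directions that the circular feature leaves ambiguous. All point coordinates and the camera matrix are hypothetical.

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Each line is two image points ((x1, y1), (x2, y2)); returns their
    intersection (the vanishing point) as a homogeneous 2D point."""
    def homog_line(p, q):
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
    v = np.cross(homog_line(*line_a), homog_line(*line_b))
    return v / v[2]  # assumes the edges are not parallel in the image

def pick_direction(v_img, K, candidates):
    """Back-project the vanishing point to a 3D ray d ~ K^-1 v and keep the
    candidate edge direction best aligned with it."""
    d = np.linalg.inv(K) @ v_img
    d /= np.linalg.norm(d)
    scores = [abs(np.dot(d, c / np.linalg.norm(c))) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Hypothetical camera intrinsics and two detected bottom-edge lines:
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
v = vanishing_point(((100, 400), (500, 380)), ((120, 450), (520, 410)))
best = pick_direction(v, K, [np.array([1.0, 0.0, 0.2]),
                             np.array([1.0, 0.0, -0.2])])
```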
{"title":"Pose Measurement of a GEO Satellite Based on Natural Features","authors":"Xiaodong Du, Bin Liang, Wenfu Xu, Xueqian Wang, Jianghua Yu","doi":"10.1109/ICVRV.2012.16","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.16","url":null,"abstract":"In order to perform the on-orbit servicing mission, the robotic system is firstly required to approach and dock with the target autonomously, for which the measurement of relative pose is the key. It is a challenging task since the existing GEO satellites are generally non-cooperative, i.e. no artificial mark is mounted to aid the measurement. In this paper, a method based on natural features is proposed to estimate the pose of a GEO satellite in the phase of R-bar final approach. The adapter ring and the bottom edges of the satellite are chosen as the recognized object. By the circular feature, the relative position can be resolved while two solutions of the orientation are obtained. The vanishing points formed by the bottom edges are applied to solve the orientation-duality problem so that the on board camera requires no specific motions. The corresponding algorithm for image processing and pose estimation is presented. Computer simulations verify the proposed method.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129970227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic image annotation has emerged as an important research topic due to its potential applications in both image understanding and web image search. Because of the inherent ambiguity of the image-label mapping, systematically developing robust, high-performing annotation models remains a challenge. In this paper, we present an image annotation framework based on Sparse Representation and Multi-Label Learning (SCMLL), which exploits both sparse image representation and a multi-label learning mechanism to address the annotation problem. We first treat each image as a sparse linear combination of other images, and regard the component images obtained by L1-minimization as the nearest neighbors of the target image. Based on statistics gathered from the label sets of these neighbors, a multi-label learning algorithm built on the maximum a posteriori (MAP) principle determines the tags for the unlabeled image. Experiments on a well-known dataset demonstrate that the proposed method is effective for image annotation and outperforms most existing image annotation algorithms.
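The two stages can be sketched as follows, assuming one feature vector per image and a binary label matrix. This is a rough stand-in, not the paper's algorithm: scikit-learn's Lasso plays the role of the L1-minimization, and a coefficient-weighted vote replaces the MAP formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def annotate(x_query, X_train, Y_train, alpha=0.01, top_k=5):
    """x_query: (d,) feature; X_train: (n, d) training features;
    Y_train: (n, m) binary label matrix. Returns indices of top_k tags."""
    # Stage 1: sparse reconstruction x_query ~ X_train.T @ w with few
    # nonzero weights, via L1-regularized least squares.
    lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
    lasso.fit(X_train.T, x_query)
    w = lasso.coef_                 # one weight per training image
    # Stage 2: weighted vote over the label sets of the sparse "neighbors"
    # (a simple stand-in for the paper's MAP label transfer).
    scores = Y_train.T @ w          # (m,) per-tag score
    return np.argsort(scores)[::-1][:top_k]

# Toy usage with random data:
X = np.random.rand(50, 128)         # 50 training images, 128-d features
Y = (np.random.rand(50, 20) > 0.8).astype(float)   # 20 possible tags
tags = annotate(np.random.rand(128), X, Y)
```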
{"title":"Automatic Image Annotation Based on Sparse Representation and Multiple Label Learning","authors":"Feng Tian, Sheng Xu-kun, Shang Fu-hua, Zhou Kai","doi":"10.1109/ICVRV.2012.11","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.11","url":null,"abstract":"Automatic image annotation has emerged as an important research topic due to its potential application on both image understanding and web image search. Due to the inherent ambiguity of image-label mapping, the annotation task has become a challenge to systematically develop robust annotation models with better performance. In this paper, we present an image annotation framework based on Sparse Representation and Multi-Label Learning (SCMLL), which aims at taking full advantage of Image Sparse representation and multi-label learning mechanism to address the annotation problem. We first treat each image as a sparse linear combination of other images, and then consider the component images as the nearest neighbors of the target image based on a sparse representation computed by L-1 minimization. Based on statistical information gained from the label sets of these neighbors, a multiple label learning algorithm based on a posteriori (MAP) principle is presented to determine the tags for the unlabeled image. The experiments over the well known data set demonstrate that the proposed method is beneficial in the image annotation task and outperforms most existing image annotation algorithms.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126309672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Providing high-quality college education for hearing-impaired students is a challenging task. The most common practices today rely heavily on specially trained instructors, in-class and after-class tutors, and accessible infrastructure such as speech-to-text services. Such approaches demand significant investments from educators, staff, and volunteers, yet remain vulnerable to quality-control and wide-deployment issues. With a proven record in education, mixed reality has the potential to serve as an assistive learning technology for hearing-impaired college students. However, the fundamental technical and theoretical questions behind this endeavor remain largely unanswered, which motivated us to conduct this pilot study of its feasibility. We designed and implemented a mixed reality system that simulated in-class assistive learning and tested it at China's largest higher-education institute for the hearing impaired. Fifteen hearing-impaired college students took part in the experiments, studying a subject outside their regular curriculum. Results showed that the mixed reality techniques were effective for in-class assistance, with moderate side effects. As a first step, this study validated the hypothesis that mixed reality can serve as an assistive learning technology for hearing-impaired college students, and it opens the avenue to our planned next phases of mixed reality research for this purpose.
{"title":"Assistive Learning for Hearing Impaired College Students using Mixed Reality: A Pilot Study","authors":"Xun Luo, Mei Han, Tao Liu, Weikang Chen, Fan Bai","doi":"10.1109/ICVRV.2012.20","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.20","url":null,"abstract":"High quality college education for hearing impaired students is a challenging task. The most common practices nowadays intensively engage specially trained instructors, inclass and after-class tutors, as well as accessible infrastructure such as speech-to-text services. Such approaches require significant manpower investments of educators, staff and volunteers, yet are still highly susceptible to quality control and wide deployment issues. With proven records in education, mixed reality has the potential to serve as a useful assistive learning technology for hearing impaired college students. However, the fundamental technical and theoretical questions for this proposed endeavor remain largely unanswered, which motivated us to conduct this pilot study to explore the feasibilities. We designed and implemented a mixed reality system that simulated in-class assistive learning, and tested it at China's largest hearing impaired higher education institute. 15 hearing impaired college students took part in the experiments and studied a subject that is not part of their regular curriculum. Results showed that the mixed reality techniques were effective for in-class assisting, with moderate side effects. As the first step, this study validated the hypothesis that mixed reality can be used as an assistive learning technology for hearing impaired college students. It also opened the avenue to our planned next phases of mixed reality research for this purpose.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132281522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a fast continuous geometric calibration method for projector-camera systems under ambient light. Our method estimates an appropriate exposure time to keep features in the captured image from degrading, and adopts the ORB descriptor to match feature pairs in real time. The adaptive exposure method has been verified with different exposure values and proved effective. We also apply our real-time continuous calibration method to a dual-projection display, where the calibration process completes smoothly within 5 frames.
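The ORB matching stage is standard enough to sketch with OpenCV. The code below shows only the feature-pair matching between the projected pattern and the captured frame; the adaptive-exposure estimation is not reproduced, and all parameter values are illustrative.

```python
import cv2

def match_orb(img_projected, img_captured, max_matches=100):
    """Return matched point pairs between two grayscale images using ORB
    features and Hamming-distance brute-force matching."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_projected, None)
    kp2, des2 = orb.detectAndCompute(img_captured, None)
    # crossCheck keeps only mutually-best pairs, a cheap ratio-test substitute.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = [kp1[m.queryIdx].pt for m in matches[:max_matches]]
    pts2 = [kp2[m.trainIdx].pt for m in matches[:max_matches]]
    return pts1, pts2
```

From such correspondences a homography (e.g. via cv2.findHomography with RANSAC) can be re-estimated every few frames, which is what makes continuous calibration under ambient light feasible.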
{"title":"Real-time Continuous Geometric Calibration for Projector-Camera System under Ambient Illumination","authors":"Yuqi Li, Niguang Bao, Qingshu Yuan, Dongming Lu","doi":"10.1109/ICVRV.2012.15","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.15","url":null,"abstract":"This paper presents a fast continuous geometric calibration method for projector-camera system under ambient light. Our method estimates an appropriate exposure time to prevent features in captured image from degradation and adopts ORB descriptor to match features pairs in real-time. The adaptive exposure method has been verified with different exposure values and proved to be effective. We also implement our real-time continuous calibration method on Dual-projection display. The calibration process can be accomplished smoothly within 5 frames.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114745401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The total size of a massive aircraft CAD model often reaches several gigabytes, exceeding not only the storage capacity of main memory but also the rendering capability of the graphics card. In this paper, we present compression and rendering methods that exploit up-to-date GPU techniques. To fit models into memory, vertex data are compressed from float to byte type using bounding-box information and decompressed on the GPU; index data are stored in short or byte type according to the vertex count, while normals are discarded and regenerated by the GPU during rendering. For real-time rendering, vertex buffer objects replace traditional display lists for efficiency, and GPU occlusion queries cull occluded parts to lower the rendering load. Furthermore, carefully designed GPU shaders optimize the traditional rendering pipeline. Experiments show that with the GPU-based methods, compression ratios reach 5.3: massive CAD models such as a regional jet compress to within 178 MB and fit into the memory of a personal computer, while frame rates reach 40 fps on an inexpensive graphics card. This demonstrates that our method exploits GPU capabilities to accelerate the real-time rendering of massive aircraft CAD models.
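The float-to-byte vertex compression can be illustrated with a short NumPy sketch: positions are quantized to 8-bit offsets within the part's bounding box and expanded back later (in the paper this expansion happens on the GPU; here it is shown on the CPU for clarity, and the data are synthetic).

```python
import numpy as np

def compress_vertices(verts):
    """verts: (n, 3) float32 positions. Returns uint8 data plus the
    bounding-box parameters needed for decoding."""
    box_min = verts.min(axis=0)
    box_max = verts.max(axis=0)
    scale = np.where(box_max > box_min, box_max - box_min, 1.0)
    q = np.round((verts - box_min) / scale * 255.0).astype(np.uint8)
    return q, box_min, scale

def decompress_vertices(q, box_min, scale):
    """Inverse mapping; on the GPU this is one multiply-add per vertex."""
    return q.astype(np.float32) / 255.0 * scale + box_min

verts = (np.random.rand(1000, 3) * 10.0).astype(np.float32)
q, box_min, scale = compress_vertices(verts)
err = np.abs(decompress_vertices(q, box_min, scale) - verts).max()
# err is bounded by half a quantization step, roughly scale / (2 * 255)
```

Storing 3 bytes instead of 12 per position gives a 4:1 ratio on vertex data alone; dropping normals and shrinking indices is what pushes the overall ratio toward the reported 5.3.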
{"title":"GPU Based Compression and Rendering of Massive Aircraft CAD Models","authors":"Tan Dunming, Zhao Gang, Yu Lu","doi":"10.1109/ICVRV.2012.8","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.8","url":null,"abstract":"The total size of massive aircraft CAD models is usually up to several GBs, which exceed not only the storage capacity of memory, but also the rendering ability of graphics card. In this paper, we present compression and rendering methods by exploring the up-to-date GPU techniques. To fit into the memory, vertex data are compressed from float to byte type with bounding box information and then decompressed with GPU. Index data are in short or byte type according to the vertex size, while normal data are deleted and generated by GPU while rendering. To render in real-time, vertex buffer object is exploited instead of traditional display list for efficiency and GPU occlusion query culls occluded parts to lower the rendering load. Furthermore, deliberately designed GPU shaders are applied to optimize the traditional rendering pipeline. The experiment results show by the GPU based methods, the compression rates get up to 5.3, massive CAD models such as the regional jet can be compressed within 178 MB and fit into memory of personal computers, and the rendering frame rates achieve up to 40 with cheap graphics card. It's proved that our method maximizes the GPU capabilities to accelerate the real-time rendering performance of massive aircraft CAD models.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125013425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating 3D cloud scenes is widely needed in computer graphics and virtual reality. Most existing methods for 3D cloud visualization first model the cloud from its physical mechanism and then solve the cloud's illumination model to generate the scene; however, such methods cannot reflect real weather conditions. Moreover, existing cloud-visualization methods based on weather forecast data cannot scale to large 3D cloud scenes because solving the illumination model is too expensive. Borrowing the idea of particle systems, this paper proposes an algorithm for automatically generating large-scale 3D clouds from weather forecast data. The algorithm treats each grid point in the data as a particle whose optical parameters are determined by the input data. Multiple forward scattering is used to calculate the incident color at each particle, and first-order scattering determines the color arriving at the observer. Experimental results demonstrate that our algorithm not only generates realistic 3D cloud scenes from weather forecast data but also achieves interactive frame rates on data containing millions of grid points.
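The per-particle shading can be caricatured as below. This sketch replaces the paper's multiple-forward-scattering computation with plain Beer-Lambert extinction toward the light and then toward the eye; the density field, extinction coefficient, and step counts are all assumptions made for illustration.

```python
import numpy as np

def transmittance(p, direction, rho, sigma_t=0.8, steps=16, step_len=0.5):
    """Beer-Lambert transmittance from point p along `direction`, integrating
    a caller-supplied density field rho(point) -> scalar."""
    tau = 0.0
    for i in range(1, steps + 1):
        tau += rho(p + direction * (i * step_len)) * sigma_t * step_len
    return np.exp(-tau)

def particle_color(p, rho, sun_dir, sun_color, eye):
    """Color of one cloud particle: sunlight attenuated on its way to the
    particle, then attenuated again on its way to the eye."""
    to_eye = eye - p
    to_eye = to_eye / np.linalg.norm(to_eye)
    incident = sun_color * transmittance(p, sun_dir, rho)
    return incident * transmittance(p, to_eye, rho)

# A single Gaussian density blob standing in for the forecast grid:
center = np.array([0.0, 5.0, 0.0])
rho = lambda q: float(np.exp(-np.dot(q - center, q - center)))
color = particle_color(np.array([0.2, 5.0, 0.1]), rho,
                       sun_dir=np.array([0.0, 1.0, 0.0]),
                       sun_color=np.array([1.0, 0.97, 0.9]),
                       eye=np.array([0.0, 5.0, 10.0]))
```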
{"title":"Automatic generation of large scale 3D cloud based on weather forecast data","authors":"W. Wenke, Guo Yumeng, Xiong Min, Li Sikun","doi":"10.1109/ICVRV.2012.19","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.19","url":null,"abstract":"3D cloud scenes generation is widely used in computer graphics and virtual reality. Most of the existing methods for 3D cloud visualization first model the cloud based on the physical mechanism of cloud, and then solve the illumination model of the cloud to generate 3D scenes. However, this kind of methods cannot show the real weather condition. Moreover, the existing cloud visualization methods based on the weather forecast data cannot be applied to the large scale 3D cloud scenes due to the complicated solution of the illumination model. Borrowing the idea of particle system, this paper proposes an algorithm for automatic generation of large scale 3D cloud based on weather forecast data. The algorithm considers each grid point in the data as a particle, whose optical parameters can be determined by the input data. Multiple forward scattering is used to calculate the incident color of each particle, and the first order scattering is utilized to determine the incident color to the observer. Experimental results demonstrate that our algorithm could not only generate realistic 3D cloud scenes from the weather forecast data, but also obtain an interactive frame rates for the data that contains millions of grids.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115254344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To provide rapid, easy, and circulatory capabilities for multipurpose applications of remote sensing imagery, a network-oriented satellite remote sensing circulation architecture is proposed, and its key components are discussed. The client-side scheme and block-wise data handling complete the circulation architecture: a client that conforms to it can redistribute the remote sensing images it has obtained and thereby act as a server, realizing circulatory utilization of remote sensing imagery.
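The paper gives no concrete protocol, but the client-to-server promotion can be pictured with a toy sketch: a node caches the image blocks it fetches and, once cached, can serve them onward to other clients. Everything here, including the block-keyed interface, is hypothetical.

```python
class CirculationNode:
    """Toy model of a circulation participant: a client that becomes a
    server for every image block it has already obtained."""

    def __init__(self):
        self.blocks = {}                 # block_id -> bytes

    def fetch(self, block_id, upstream):
        """Get a block from an upstream node, caching it locally."""
        if block_id not in self.blocks:
            self.blocks[block_id] = upstream.serve(block_id)
        return self.blocks[block_id]

    def serve(self, block_id):
        """Once cached, this node can act as a server for the block."""
        return self.blocks[block_id]
```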
{"title":"A Network-Oriented Application of Satellite Remote Sensing Circulation Architecture","authors":"Lingda Wu, Rui Cao, Y. Bian, Jie Jiang","doi":"10.1109/ICVRV.2012.17","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.17","url":null,"abstract":"In order to create a rapid, easy, and circulatory capabilities for multipurpose applications of remote sensing image, a network-oriented satellite remote sensing circulation architecture was proposed, and its key components was discussed in the same time. The client scheming and block data disposure were supplements to the integrity circulation architecture, conforming to which, clients can distribute the obtained remote sensing images and transfer to be servers, remote sensing image circulation utilization was realized.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127941138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Volume rendering is an important visualization technique widely used in many fields, but occlusion is one of the key problems that plague it. To see important features in a dataset, users must modify transfer functions by trial and error, which is time-consuming and indirect. In this paper, we provide an interactive continuous-erasing method that lets users quickly reach the features they are interested in, and an interactive clustering method for viewing classified features. The first method maps the user's direct on-screen operations to the 3D data space in real time and changes the rendering result according to the chosen mode; users can operate directly on the 3D rendering on the screen and filter out any uninteresting parts. The second method uses a Gaussian Mixture Model (GMM) to cluster the raw data into distinct parts. We verify the general applicability of our methods on various datasets from different areas.
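The GMM stage maps naturally onto scikit-learn. The sketch below clusters voxels by intensity and gradient magnitude so each cluster can be shown or erased independently; the feature choice and cluster count are assumptions, not necessarily the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_volume(volume, n_clusters=4):
    """volume: 3D numpy array of scalar values.
    Returns a per-voxel cluster-label volume of the same shape."""
    gx, gy, gz = np.gradient(volume.astype(np.float32))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    # One feature row per voxel: (intensity, gradient magnitude).
    feats = np.stack([volume.ravel(), grad_mag.ravel()], axis=1)
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag")
    labels = gmm.fit_predict(feats)
    return labels.reshape(volume.shape)

labels = cluster_volume(np.random.rand(32, 32, 32))
```

For large volumes one would fit the GMM on a random subsample of voxels and then call predict on the rest, since fitting on every voxel is memory-hungry.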
{"title":"Interactive Continuous Erasing and Clustering in 3D","authors":"Shen Enya, Wang Wen-ke, Li Si-kun, Cai Xun","doi":"10.1109/ICVRV.2012.21","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.21","url":null,"abstract":"As an important visualization way, volume rendering is widely used in many fields. However, occlusion is one of the key problems that perplex traditional volume rendering. In order to see some important features in the datasets, users have to modify the Transfer Functions in a trial and error way which is time-consuming and indirect. In this paper, we provide an interactive continuous erasing for users to quickly get features that they are interested in and an interactive clustering way to view classified features. The first method map user's direct operation on the screen to 3D data space in real time, and then change the rendering results according to the modes that users make use of. Users could directly operate on the 3D rendering results on the screen, and filter any uninterested parts as they want. The second method makes use of Gaussian Mixture Model (GMM) to cluster raw data into different parts. We check the universal practicality of our methods by various datasets from different areas.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134423205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As a depth-perception cue, motion parallax can bring stereoscopic visualization closer to natural human vision, and stereoscopic visualization with motion parallax rendering can lessen fatigue during long immersion in virtual scenes. This paper presents a three-stage approach to real-time stereoscopic visualization (SV) with motion parallax rendering (MPR), consisting of head motion sensing, head-camera motion mapping, and stereo pair generation; the theory and algorithm for each stage are presented. The paper also reviews the head-tracking technologies and stereoscopic rendering methods most used in virtual and augmented reality. A demo application demonstrates the efficiency and adaptability of the algorithms, and the experimental results show that our algorithm for SV with MPR is robust and efficient. Aircraft virtual assembly environments with motion parallax rendering can thus provide better interaction experiences and higher assembly efficiency.
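The stereo-pair-generation stage typically amounts to off-axis projection. The following sketch, under assumed screen geometry and interpupillary distance, derives two eye positions from the tracked head and an asymmetric frustum (glFrustum-style bounds) for each eye; it illustrates the standard technique, not the authors' exact head-camera mapping.

```python
import numpy as np

IPD = 0.064                        # interpupillary distance in meters (assumed)
SCREEN_W, SCREEN_H = 0.52, 0.32    # physical screen size in meters (assumed)
NEAR, FAR = 0.1, 100.0

def eye_positions(head_pos, right_vec):
    """Offset the tracked head position by half the IPD along the
    head's right vector to get left/right eye positions."""
    half = 0.5 * IPD * right_vec / np.linalg.norm(right_vec)
    return head_pos - half, head_pos + half

def off_axis_frustum(eye):
    """Frustum bounds at the near plane for a screen centered at the origin
    in the z = 0 plane, viewer at eye = (x, y, d) with d > 0."""
    d = eye[2]
    left = (-SCREEN_W / 2 - eye[0]) * NEAR / d
    right = (SCREEN_W / 2 - eye[0]) * NEAR / d
    bottom = (-SCREEN_H / 2 - eye[1]) * NEAR / d
    top = (SCREEN_H / 2 - eye[1]) * NEAR / d
    return left, right, bottom, top, NEAR, FAR   # e.g. glFrustum arguments

head = np.array([0.05, 0.0, 0.6])                # tracked head position
l_eye, r_eye = eye_positions(head, np.array([1.0, 0.0, 0.0]))
frustum_l, frustum_r = off_axis_frustum(l_eye), off_axis_frustum(r_eye)
```

Recomputing both frusta every frame from the tracked head is exactly what produces the motion parallax: the projection follows the viewer instead of staying fixed.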
{"title":"A Motion Parallax Rendering Approach to Real-time Stereoscopic Visualization for Aircraft Virtual Assembly","authors":"Junjie Xue, Gang Zhao, Dunming Tan","doi":"10.1109/ICVRV.2012.9","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.9","url":null,"abstract":"As a cue to depth perception, motion parallax can improve stereoscopic visualization to a level more like human natural vision. And stereoscopic visualization with motion parallax rendering can lessen the fatigue when people are long-time immersed in virtual scenes. This paper presents a three-stage approach for real-time stereoscopic visualization (SV) with motion parallax rendering (MPR), which consists of head motion sensing, head-camera motion mapping, and stereo pair generation procedures. Theory and algorithm for each stage are presented. This paper also reviews the head tracking technologies and stereoscopic rendering methods mostly used in virtual and augmented reality. A demo application is developed to show the efficiency and adaptability of the algorithms. The experimental results show that our algorithm for SV with MPR is robust and efficient. And aircraft virtual assembly environments with motion parallax rendering can guarantee better interaction experiences and higher assembly efficiency.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130928346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a method for representing complicated real-world illumination using HDR light probe sequences. The proposed representation employs a non-uniform structure instead of a uniform light field to simulate lighting with spatial and angular variation, which proves more efficient and accurate. The captured illumination is divided into direct and indirect parts that are modeled separately, and both integrate easily with global illumination algorithms: the direct part is organized as a set of clusters on a virtual plane, which handles lighting occlusion successfully, while the indirect part is represented as a bounding mesh with an HDR texture. The paper demonstrates the capture of real illumination for virtual scenes and compares the renderings with traditional image-based lighting.
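The direct/indirect split can be illustrated with a simple luminance-threshold sketch: bright connected regions of a probe image become the "direct" light clusters, and the remainder stays as the indirect environment map. The threshold, the latitude-longitude layout, and clustering by connected components are assumptions; the paper's actual clustering onto a virtual plane is more involved.

```python
import numpy as np
from scipy import ndimage

def split_probe(hdr, thresh_scale=10.0):
    """hdr: (h, w, 3) float array, assumed latitude-longitude layout.
    Returns (lights, indirect): a list of (centroid, RGB power) clusters
    and the residual environment image."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    mask = lum > thresh_scale * lum.mean()
    labels, n = ndimage.label(mask)            # connected bright regions
    lights = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        power = hdr[ys, xs].sum(axis=0)        # total RGB energy of cluster
        lights.append(((ys.mean(), xs.mean()), power))
    indirect = hdr.copy()
    indirect[mask] = 0.0                       # what remains is indirect light
    return lights, indirect

lights, indirect = split_probe(np.random.rand(64, 128, 3) ** 4)
```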
{"title":"Non-uniform Illumination Representation based on HDR Light Probe Sequences","authors":"Jian Hu, Tao Yu, L. Wang, Zhong Zhou, Wei Wu","doi":"10.1109/ICVRV.2012.18","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.18","url":null,"abstract":"This paper presents a method to represent the complicated illumination in the real world by using HDR light probe sequences. The illumination representations proposed in this paper employ non-uniform structure instead of uniform light field to simulate lighting with spatial and angular variation, which turns out to be more efficient and accurate. The captured illuminations are divided into direct and indirect parts that are modeled respectively. Both integrated with global illumination algorithm easily, the direct part is organized as an amount of clusters on a virtual plane, which can solve the lighting occlusion problem successfully, while the indirect part is represented as a bounding mesh with HDR texture. This paper demonstrates the technique that captures real illuminations for virtual scenes, and also shows the comparison with the renderings using traditional image based lighting.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"356 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133698328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}