Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00014
Jiaqing Liu, Xukun Shen, Yong Hu
In this paper we describe a variational approach to reconstructing non-rigid shape from a monocular video sequence based on optical flow feedback. To obtain dense 2D correspondences from the image sequence, which are critical for 3D reconstruction, we formulate the multi-frame optical flow problem as a global energy minimization using subspace constraints, which elegantly settles the problems of large displacements and the high cost caused by dimensionality. Using the long-term trajectories tracked by the optical flow field as input, our method estimates the depth of each traced pixel in every frame based on a Non-Rigid Structure from Motion (NRSfM) algorithm. Finally, we refine the 3D shape via interpolation on the recovered 3D point cloud and camera parameters. Experiments on real sequences of different objects demonstrate the accuracy and robustness of our framework.
Title: Monocular Reconstruction of Non-rigid Shapes Using Optical Flow Feedback (2017 International Conference on Virtual Reality and Visualization, ICVRV)
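The subspace constraint above can be illustrated with a toy low-rank projection of a trajectory matrix; this is a generic sketch of the idea (stacking 2D tracks into a 2F x P matrix and truncating its SVD), not the paper's actual energy minimization.

```python
import numpy as np

def subspace_project(W, rank):
    """Project a 2F x P trajectory matrix onto its best rank-k subspace.

    Under a subspace constraint, the 2D trajectories of a non-rigid scene
    lie (approximately) in a low-dimensional linear subspace, so the
    trajectory matrix admits a low-rank factorization.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Toy check: trajectories generated from a rank-3 basis are reproduced
# exactly by a rank-3 projection.
rng = np.random.default_rng(0)
basis = rng.standard_normal((10, 3))   # 2F x K, with F = 5 frames
coeff = rng.standard_normal((3, 20))   # K x P, with P = 20 tracked points
W = basis @ coeff
W_low = subspace_project(W, 3)
print(np.allclose(W, W_low))           # True: W already lies in the subspace
```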
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00097
Benyang Cao, Zhenliang Zhang, Dongdong Weng
In this paper, we propose a mixed reality system consisting of a binocular optical see-through head-mounted display (OST-HMD) and a depth sensor. With the proposed system, participants perform manipulation tasks to experience a direct physics-inspired interaction mode. Moreover, we design an active selection rule for accurate, detailed manipulation. The results indicate that although the direct physics-inspired interaction mode is not especially efficient, it shows advantages in naturalness and appeal.
Title: Evaluation of Direct Physics-Inspired Interaction for Mixed Reality Based on Optical See-Through Head-Mounted Displays
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00017
Xiangbin Shi, Yaguang Lu, Cuiwei Liu, Deyuan Zhang, Fang Liu
In this paper, we aim to address the problem of temporal segmentation of videos. Videos acquired from the real world usually contain several continuous actions. Some previous works divide such real-world videos into many fixed-length video clips, since features obtained from a single frame cannot fully describe human motion over a period. But a fixed-length clip may contain frames from several adjacent actions, which significantly degrades the performance of action segmentation and recognition. Here we propose a novel unsupervised method, based on the directions of velocity, to divide an input video into a series of variable-length clips. Experiments conducted on the IXMAS dataset verify the effectiveness of our method.
Title: A Novel Unsupervised Method for Temporal Segmentation of Videos
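As a rough illustration of the direction-of-velocity idea, a single 2D trajectory can be cut wherever the velocity direction turns sharply; the angle threshold and the single-track input here are hypothetical simplifications, not the paper's exact formulation.

```python
import numpy as np

def segment_by_velocity_direction(positions, angle_thresh=np.pi / 2):
    """Cut a trajectory into variable-length clips at frames where the
    direction of velocity changes sharply.

    positions: (T, 2) array of tracked 2D positions, one row per frame.
    Returns a list of (start, end) frame-index pairs, end exclusive.
    """
    v = np.diff(positions, axis=0)                 # per-frame velocity
    angles = np.arctan2(v[:, 1], v[:, 0])
    # Angular change between consecutive velocities, wrapped to [-pi, pi].
    dtheta = np.abs((np.diff(angles) + np.pi) % (2 * np.pi) - np.pi)
    cuts = [0] + [int(i) + 1 for i in np.nonzero(dtheta > angle_thresh)[0]]
    cuts.append(len(positions))
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

# Motion goes right for 5 frames, then reverses: one cut is placed there.
traj = np.array([[t, 0.0] for t in range(5)] +
                [[4.0 - t, 0.0] for t in range(1, 5)])
print(segment_by_velocity_direction(traj))   # [(0, 4), (4, 9)]
```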
Symmetry detection of 3D models is a fundamental step in many applications, and various methods have been proposed. Most of them treat 3D models as shell models; by contrast, our algorithm detects volumetric symmetries based on voxelization. Experimental results show that the proposed algorithm detects symmetries that are closer to human understanding. Besides detecting accurate symmetries, the algorithm also provides accurate, intuitive and stable measurements of the detected symmetries. A hierarchical strategy speeds up the algorithm: it detects symmetries at high resolutions very efficiently by reusing computational results obtained at low resolutions.
Title: A Hierarchical Symmetry Detection Algorithm Based on Voxelization
Authors: Xuanmeng Xie, Shan Luo, Qitong Zhang, Jieqing Feng
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00025
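A minimal sketch of volumetric symmetry scoring on a voxel grid, assuming a mirror-and-compare measure (intersection-over-union of the grid with its mirror image); the paper's actual detection and hierarchical reuse of low-resolution results are more involved.

```python
import numpy as np

def reflective_symmetry_score(voxels, axis):
    """Score reflective symmetry of a boolean voxel grid about the mid-plane
    perpendicular to `axis`: intersection-over-union of the grid with its
    mirror image (1.0 means perfectly symmetric).
    """
    mirrored = np.flip(voxels, axis=axis)
    inter = np.logical_and(voxels, mirrored).sum()
    union = np.logical_or(voxels, mirrored).sum()
    return inter / union if union else 1.0

# A centered box is symmetric about the mid-plane; carving one corner
# voxel breaks the symmetry slightly and lowers the score.
grid = np.zeros((8, 8, 8), dtype=bool)
grid[2:6, 2:6, 2:6] = True
s_sym = reflective_symmetry_score(grid, axis=0)
grid[2, 2, 2] = False
s_asym = reflective_symmetry_score(grid, axis=0)
print(s_sym, s_asym)   # 1.0 0.96875
```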
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00042
Pan Zhou, Quanyu Wang
To achieve better interactivity in a virtual surgery system, we study interaction technology for virtual surgery, focusing on exposure of the surgical view using retractors, a basic surgical operation. In this paper, we present a solution to the problem that current methods cannot achieve good positioning accuracy and high-quality visual feedback while maintaining the required refresh rate. We chose a handle as the interactive equipment and applied a combination of inertial-sensor and laser positioning methods to its spatial localization, which not only guarantees accuracy but also makes the output data smoother and greatly increases the update rate. Visual feedback is the main function in interaction with the human-body model; we propose an improved mass-spring-damper model that includes both a surface grid and a skeleton grid, and additionally connects particles on the surface grid to the internal skeleton grid through springs, to effectively support the mesh surface and prevent hyperelastic deformation. Finally, experiments were conducted, and the results show that the above methods achieve higher accuracy and efficiency in interaction.
Title: Research on Interaction of Exposure Operation in Virtual Surgery
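The spring connections described above rest on the standard mass-spring-damper force term, which can be sketched as follows; the parameter values are illustrative, and the paper's surface-plus-skeleton topology is not modeled here.

```python
import numpy as np

def spring_damper_force(p1, p2, v1, v2, rest_len, ks, kd):
    """Force exerted on particle 1 by a spring-damper link to particle 2."""
    d = p2 - p1
    length = np.linalg.norm(d)
    dir_ = d / length
    spring = ks * (length - rest_len) * dir_        # Hooke's law
    damper = kd * np.dot(v2 - v1, dir_) * dir_      # damping along the spring
    return spring + damper

# A spring stretched to twice its rest length pulls particle 1 toward 2.
f = spring_damper_force(np.zeros(3), np.array([2.0, 0.0, 0.0]),
                        np.zeros(3), np.zeros(3),
                        rest_len=1.0, ks=10.0, kd=0.5)
print(f)   # [10.  0.  0.]
```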
Traditional picture books offer poor interaction and low learning efficiency. To address these shortcomings, a picture-book mobile augmented reality system (MARS) is presented in this paper. It provides painting, literacy, and story-listening functions, which help children understand things through visual, auditory, and tactile channels. This paper designs MARS based on image recognition and bi-directional matching technology, and develops an interactive AR card named ARMonkey with MARS. Using the AR picture book not only exercises children's hands-on ability but also improves their imagination and brings them a lot of fun.
Title: Mobile Augmented Reality System for Preschool Education
Authors: Shou-Ming Hou, Yan-Yan Liu, Qi-Bo Tang, Xiaozhi Guo
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00074
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00090
Z. Geng, Y. Qiao
A Speeded-Up Robust Features (SURF) descriptor and matching algorithm based on brightness order is designed to overcome the precision and robustness problems of the original SURF algorithm. Pixels are sorted and segmented according to their gray values in the feature's support region. By establishing an index table, each segment of pixels is represented as a sub-descriptor, and the sub-descriptors are concatenated to form the feature descriptor used to match images. The experimental results show that the proposed method achieves higher matching accuracy and better robustness to linear and nonlinear illumination changes than SURF.
Title: An Improved Illumination Invariant SURF Image Feature Descriptor
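The brightness-order idea can be sketched by grouping pixels by intensity rank and pooling the groups spatially; this simplified descriptor (not the paper's exact SURF variant) shows why ordering-based segmentation is invariant to monotonic illumination changes.

```python
import numpy as np

def intensity_order_descriptor(patch, n_segments=4):
    """Pool pixels spatially by their intensity-order segment.

    Each pixel is assigned to one of `n_segments` groups by the rank of its
    intensity, and the descriptor counts how many pixels of each group fall
    in each image quadrant. Because grouping uses only the *order* of
    intensities, the descriptor is unchanged by any monotonically
    increasing illumination transform.
    """
    h, w = patch.shape
    order = np.argsort(patch.ravel(), kind="stable")
    group = np.empty(h * w, dtype=int)
    for g, idx in enumerate(np.array_split(order, n_segments)):
        group[idx] = g
    group = group.reshape(h, w)
    desc = np.zeros((n_segments, 4), dtype=int)
    for i in range(h):
        for j in range(w):
            quad = (i >= h // 2) * 2 + (j >= w // 2)
            desc[group[i, j], quad] += 1
    return desc.ravel()

# A nonlinear but monotonic illumination change leaves the descriptor intact.
rng = np.random.default_rng(1)
patch = rng.random((16, 16))
d1 = intensity_order_descriptor(patch)
d2 = intensity_order_descriptor(patch ** 2.2)   # gamma change preserves order
print(np.array_equal(d1, d2))                   # True
```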
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00013
Yiming Zhang, Xiangyun Xiao, Xubo Yang
Object detection for 360-degree panoramic images is widely applied in many areas such as autonomous driving, drone navigation, and driving assistance. Most state-of-the-art approaches for detecting objects in ordinary images do not work well on 360-degree panoramic images. Because a 360-degree panoramic image can be considered a 2D image obtained by unrolling a panoramic sphere along the longitude line, objects in it are distorted or split apart, making detection more difficult. In this paper, we present a real-time object detection system for 360-degree panoramic images using a convolutional neural network (CNN). We adopt a CNN-based detection framework with a post-processing stage to fine-tune the result. Additionally, we propose a novel method to reuse existing datasets of ordinary images, e.g., ImageNet and PASCAL VOC, for object detection in 360-degree panoramic images. We demonstrate with several examples that our method yields higher accuracy and recall than traditional methods.
Title: Real-Time Object Detection for 360-Degree Panoramic Image Using CNN
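One concrete piece of such post-processing could be re-joining detections split by the panorama's left/right seam; the helper below is a hypothetical sketch of that idea, not the paper's actual fine-tuning stage.

```python
def merge_seam_boxes(boxes, width, gap_tol=2):
    """Merge detections split across the left/right seam of an
    equirectangular panorama.

    boxes: list of (x1, y1, x2, y2) pixel boxes. A box touching the left
    edge and a box touching the right edge with overlapping vertical extent
    are treated as two halves of one wrapped object and merged into a single
    box whose x2 runs past `width` (i.e. it wraps around the seam).
    """
    left = [b for b in boxes if b[0] <= gap_tol]
    right = [b for b in boxes if b[2] >= width - gap_tol]
    merged, used = [], set()
    for lb in left:
        for rb in right:
            v_overlap = min(lb[3], rb[3]) - max(lb[1], rb[1])
            if v_overlap > 0:
                merged.append((rb[0], min(lb[1], rb[1]),
                               width + lb[2], max(lb[3], rb[3])))
                used.update([lb, rb])
    return [b for b in boxes if b not in used] + merged

# One object wrapped around the seam of a 1000-px-wide panorama, plus one
# ordinary detection in the middle of the image.
boxes = [(0, 100, 60, 200), (950, 110, 1000, 190), (400, 50, 500, 150)]
print(merge_seam_boxes(boxes, width=1000))
```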
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00032
Lanling Zeng, Lingling Zhang, Yang Yang, Baoan Yang, Yongzhao Zhan
Plants exhibit varied beauty, as their leaves and structures come in a variety of colors, shapes, sizes, and textures. Consequently, noise and edge adhesion make plant model reconstruction highly challenging. To separate adhering parts and obtain an ideal point cloud, we present a limited-detail multi-density point reconstruction method. Its core is a point-to-surface approach that continually refines the density of points to generate a fuzzy surface. Compared with traditional mesh reconstruction, our method solves the adhesion problem between branches and leaves. Results show that limited-detail multi-density point reconstruction is feasible, with good visual quality and fast speed.
Title: Plants Modeling Based on Limited Points
Pub Date: 2017-10-01. DOI: 10.1109/ICVRV.2017.00068
Ge Zhang, Haosheng Chen, Yangdong Ye
The demand for self-service mobility aids such as smart wheelchairs is becoming urgent as society develops. Traditional mobility facilities perform poorly in indoor environments and cannot achieve fine-grained navigation in outdoor environments with GPS locators. Based on simultaneous localization and mapping with heterogeneous sensors and dynamic navigation with threat degrees, we introduce a multi-granularity navigation approach for self-service mobility aids. Visual-inertial odometry measurements are integrated with GPS readings for target orientation, generating probabilistic-octree 3D maps fitted to the real environment and providing dynamic probabilistic-octree navigation. This approach corrects visual odometry errors with inertial and GPS readings. The multi-granularity environment representation fused with the probabilistic octree takes sensor characteristics and mapping accuracy into account and achieves autonomous navigation without any prior knowledge. Experiments demonstrate its effectiveness in minimizing trajectory error under a comprehensive range of material and luminance conditions. This approach also provides theoretical principles for research and development of self-service mobility facilities.
Title: Multi-granularity Navigation for Self Service Moving
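Probabilistic octree maps are typically maintained with per-cell log-odds occupancy updates; the sketch below uses a generic OctoMap-style update rule with illustrative sensor probabilities, not values from the paper.

```python
import math

def update_occupancy(prob, hit, p_hit=0.7, p_miss=0.4):
    """One log-odds occupancy update for a single map cell: convert the
    current probability to log-odds, add the sensor model's log-odds for a
    hit or a miss, and convert back to a probability.
    """
    logodds = math.log(prob / (1.0 - prob))
    sensor = p_hit if hit else p_miss
    logodds += math.log(sensor / (1.0 - sensor))
    return 1.0 / (1.0 + math.exp(-logodds))

# Repeated hits drive an initially unknown cell (p = 0.5) toward occupied.
p = 0.5
for _ in range(5):
    p = update_occupancy(p, hit=True)
print(round(p, 3))   # 0.986
```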