Pub Date: 2014-12-09 | DOI: 10.1109/IC3D.2014.7032601
No-reference perceptual blur metric for stereoscopic images
Sid Ahmed Fezza, M. Larabi
In this paper, we propose a no-reference perceptual blur metric for 3D stereoscopic images. The proposed approach relies on computing a perceptual local blurriness map for each image of the stereo pair. To take the disparity/depth masking effect into account, we modulate the perceptual score at each position of the blurriness maps according to its location in the scene. Under the assumption that, in the case of asymmetric stereoscopic image quality, 3D perception mechanisms place more emphasis on the view providing the most important and contrasted information, the two local blurriness maps are combined using weighting factors based on local information content. Thanks to the inclusion of these psychophysical findings, the proposed metric efficiently handles both symmetric and asymmetric distortions. Experimental results show that the proposed metric correlates better with human perception than state-of-the-art metrics.
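
As an illustration of the fusion step described in the abstract, the sketch below combines two per-view blurriness maps using weights derived from local information content. Local variance is used here as a stand-in measure of information content, and all function names are hypothetical rather than taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=9):
    """Local variance as a simple proxy for local information content."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)

def fuse_blurriness(blur_left, blur_right, img_left, img_right, eps=1e-8):
    """Weight each view's blurriness map by its local information content,
    then pool the fused map into a single scalar quality score."""
    w_l = local_variance(img_left)
    w_r = local_variance(img_right)
    fused = (w_l * blur_left + w_r * blur_right) / (w_l + w_r + eps)
    return float(fused.mean())
```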
{"title":"No-reference perceptual blur metric for stereoscopic images","authors":"Sid Ahmed Fezza, M. Larabi","doi":"10.1109/IC3D.2014.7032601","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032601","url":null,"abstract":"In this paper, we propose a no-reference perceptual blur metric for 3D stereoscopic images. The proposed approach relies on computing perceptual local blurriness map for each image of the stereo pair. To take into account the disparity/depth masking effect, we modulate the obtained perceptual score at each position of the blurriness maps according to its location in the scene. Under the assumption that, in case of asymmetric stereoscopic image quality, 3D perception mechanisms place more emphasis on the view providing the most important and contrasted information, the two derived local blurriness maps are combined using weighting factors based on local information content. Thanks to the inclusion of those psychophysical findings, the proposed metric handles efficiently symmetric as well as asymmetric distortions. Experimental results show that the proposed metric correlates better with human perception than state-of-the-art metrics.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"39 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132747303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-09 | DOI: 10.1109/IC3D.2014.7032598
Camera oscillation pattern for VSLAM: Translational versus rotational
M. Heshmat, M. Abdellatif, Kazuaki Nakamura, A. Abouelsoud, N. Babaguchi
Visual SLAM algorithms exploit natural scene features to infer the camera motion and build a map of the environment landmarks. A SLAM algorithm has two interrelated processes: localization and mapping. For accurate localization, we need the feature location estimates to converge quickly; conversely, to build an accurate map, we need accurate localization. Recently, a biologically inspired approach that exploits deliberate camera oscillation has been used to improve the convergence speed of depth estimates. In this paper, we explore the effect of the camera oscillation pattern on the accuracy of VSLAM. Two main oscillation patterns are used for distance estimation: translational and rotational. Experiments with a static and a moving robot explore the effect of these oscillation patterns on VSLAM performance.
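
For concreteness, here is a minimal sketch of the two oscillation patterns being compared; the paper does not publish code, so the generator functions, amplitudes, and frequencies below are purely illustrative.

```python
import numpy as np

def translational_oscillation(t, amplitude=0.02, freq=1.0):
    """Camera-centre offset (metres) along the lateral axis at time t (s)."""
    return np.array([amplitude * np.sin(2 * np.pi * freq * t), 0.0, 0.0])

def rotational_oscillation(t, amplitude_deg=2.0, freq=1.0):
    """Yaw angle (radians) about the camera's vertical axis at time t (s)."""
    return np.deg2rad(amplitude_deg) * np.sin(2 * np.pi * freq * t)
```

A translational pattern physically displaces the optical centre and so induces parallax directly, whereas a rotational pattern changes only the viewing direction; the paper's experiments measure how each choice affects VSLAM accuracy.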
{"title":"Camera oscillation pattern for VSLAM: Translational versus rotational","authors":"M. Heshmat, M. Abdellatif, Kazuaki Nakamura, A. Abouelsoud, N. Babaguchi","doi":"10.1109/IC3D.2014.7032598","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032598","url":null,"abstract":"Visual SLAM algorithms exploit natural scene features to infer the camera motion and build a map of the environment landmarks. SLAM algorithm has two interrelated processes localization and mapping. For accurate localization, we need the features location estimates to converge quickly. On the other hand, to build an accurate map, we need accurate localization. Recently, a biologically inspired approach exploits deliberate camera oscillation has been used to improve the convergence speed of depth estimate. In this paper, we explore the effect of camera oscillation pattern on the accuracy of VSLAM. Two main oscillation patterns are used for distance estimation: translational and rotational. Experiments, using static and moving robot, are made to explore the effect of these oscillation patterns on the VSLAM performance.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126782394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-09 | DOI: 10.1109/IC3D.2014.7032602
Visual attention modeling for 3D video using neural networks
Iana Iatsun, M. Larabi, C. Fernandez-Maloigne
Visual attention is one of the most important mechanisms in human visual perception. Recently, its modeling has become a principal requirement for the optimization of image processing systems. Numerous algorithms have been designed for 2D saliency prediction, but only a few works address 3D content. In this study, we propose a saliency model for stereoscopic 3D video. The algorithm extracts information from three dimensions of the content: spatial, temporal, and depth. The model exploits the tendency of interest points to lie close to human fixations in order to build spatial salient features. Moreover, since the perception of depth relies strongly on monocular cues, our model extracts depth salient features from pictorial depth sources. Because weights for the fusion strategy are often selected in an ad hoc manner, we instead use a machine learning approach: an artificial neural network defines adaptive weights based on eye-tracking data. The results of the proposed algorithm are evaluated against ground truth and compared with state-of-the-art techniques.
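
A minimal sketch of the learned-fusion idea, assuming pre-computed per-pixel spatial, temporal, and depth saliency features and a fixation-density map obtained from eye-tracking. The network size and the scikit-learn API choice are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_fusion_model(spatial, temporal, depth, fixation_density):
    """Learn an adaptive mapping from the three saliency features to the
    eye-tracking fixation density, instead of fixing fusion weights ad hoc."""
    X = np.stack([spatial.ravel(), temporal.ravel(), depth.ravel()], axis=1)
    y = fixation_density.ravel()
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500)
    model.fit(X, y)
    return model

def predict_saliency(model, spatial, temporal, depth):
    """Fuse the three feature maps into one saliency map with the learned model."""
    X = np.stack([spatial.ravel(), temporal.ravel(), depth.ravel()], axis=1)
    return model.predict(X).reshape(spatial.shape)
```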
{"title":"Visual attention modeling for 3D video using neural networks","authors":"Iana Iatsun, M. Larabi, C. Fernandez-Maloigne","doi":"10.1109/IC3D.2014.7032602","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032602","url":null,"abstract":"Visual attention is one of the most important mechanisms in the human visual perception. Recently, its modeling becomes a principal requirement for the optimization of the image processing systems. Numerous algorithms have already been designed for 2D saliency prediction. However, only few works can be found for 3D content. In this study, we propose a saliency model for stereoscopic 3D video. This algorithm extracts information from three dimensions of content, i.e. spatial, temporal and depth. This model benefits from the properties of interest points to be close to human fixations in order to build spatial salient features. Besides, as the perception of depth relies strongly on monocular cues, our model extracts the depth salient features using the pictorial depth sources. Since weights for fusion strategy are often selected in ad-hoc manner, in this work, we suggest to use a machine learning approach. The used artificial Neural Network allows to define adaptive weights based on the eye-tracking data. The results of the proposed algorithm are tested versus ground-truth information using the state-of-the-art techniques.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"113 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114010005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-09 | DOI: 10.1109/IC3D.2014.7032600
Dynamic stereoscopic previz
S. Pujades, Laurent Boiron, Rémi Ronfard, Frederic Devernay
The pre-production stage in a film workflow is important for saving time during production. To be useful in stereoscopic 3-D movie-making, storyboards and previz tools need to be adapted in at least two ways. First, it should be possible to specify the desired depth values with suitable and intuitive user interfaces. Second, it should be possible to preview the stereoscopic movie at a suitable screen size. In this paper, we describe a novel technique for simulating a cinema projection room with arbitrary dimensions in a real-time game engine, while controlling the camera interaxial and convergence parameters with a gamepad controller. Our technique has been implemented in the Blender Game Engine and tested during the shooting of a short movie. Qualitative experimental results show that our technique overcomes the limitations of previous work in stereoscopic previz and can usefully complement traditional storyboards during pre-production of stereoscopic 3-D movies.
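
The geometry such a previz tool must expose can be summarized with standard stereoscopy relations: the screen parallax of a point at depth Z shot with interaxial b and convergence distance C (shifted-sensor model), and the depth a viewer then perceives. The sketch below uses these textbook formulas; parameter names and default values are illustrative, not taken from the paper.

```python
def screen_parallax(Z, b, C, focal, sensor_width, screen_width):
    """Parallax on the cinema screen (metres) for a scene point at depth Z.
    focal and sensor_width must share units; b, C, Z, screen_width in metres."""
    disparity_on_sensor = focal * b * (1.0 / C - 1.0 / Z)  # zero at Z == C
    return disparity_on_sensor * screen_width / sensor_width

def perceived_depth(parallax, eye_sep=0.065, view_dist=10.0):
    """Depth perceived by a viewer seated view_dist metres from the screen."""
    if parallax >= eye_sep:            # parallax beyond eye separation: divergence
        return float("inf")
    return eye_sep * view_dist / (eye_sep - parallax)
```

A point at the convergence distance has zero parallax and is perceived on the screen plane; previewing with the correct screen width matters because the same sensor disparity maps to a very different parallax, and hence depth, on a small monitor than in a projection room.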
{"title":"Dynamic stereoscopic previz","authors":"S. Pujades, Laurent Boiron, Rémi Ronfard, Frederic Devernay","doi":"10.1109/IC3D.2014.7032600","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032600","url":null,"abstract":"The pre-production stage in a film workflow is important to save time during production. To be useful in stereoscopic 3-D movie-making, storyboards and previz tools need to be adapted in at least two ways. First, it should be possible to specify the desired depth values with suitable and intuitive user interfaces. Second, it should be possible to preview the stereoscopic movie with a suitable screen size. In this paper, we describe a novel technique for simulating a cinema projection room with arbitrary dimensions in a realtime game engine, while controling the camera interaxial and convergence parameters with a gamepad controller. Our technique has been implemented in the Blender Game Engine and tested during the shooting of a short movie. Qualitative experimental results show that our technique overcomes the limitations of previous work in stereoscopic previz and can usefully complement traditional storyboards during pre-production of stereoscopic 3-D movies.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131120736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-09 | DOI: 10.1109/IC3D.2014.7032584
Dynamic feature detection using virtual correction and camera oscillations
M. Heshmat, M. Abdellatif, Kazuaki Nakamura, A. Abouelsoud, N. Babaguchi
Visual SLAM algorithms exploit natural scene features to infer the camera motion and build a map of a static environment. In this paper, we relax the severe assumption of a static scene to allow for the detection and deletion of dynamic points. A new "virtual correction" method is introduced, which detects dynamic points by checking the re-projection error of each point before and after a virtual measurement update. It can also recover erroneously excluded useful features, particularly distant points that may be deleted because their positions change after a new measurement observation. Deliberate camera oscillations are also used to improve VSLAM accuracy and camera observability. Simulation results show the effectiveness of virtual correction, combined with camera oscillation, in recovering misclassified features and detecting dynamic features even in difficult scenarios.
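
One plausible reading of the virtual-correction rule, sketched below with hypothetical `project` and `virtual_update` callables: a landmark is flagged dynamic when an uncommitted measurement update fails to bring its re-projection error down. This is an interpretation of the abstract, not the paper's exact criterion.

```python
import numpy as np

def reprojection_error(landmark_xyz, observed_uv, project):
    """Pixel distance between the projected landmark and its observation."""
    return np.linalg.norm(project(landmark_xyz) - np.asarray(observed_uv))

def is_dynamic(landmark_xyz, observed_uv, project, virtual_update, tol=0.5):
    """Flag a landmark as dynamic if a virtual (uncommitted) update does not
    reduce its re-projection error by more than tol pixels."""
    err_before = reprojection_error(landmark_xyz, observed_uv, project)
    corrected = virtual_update(landmark_xyz, observed_uv)  # never written to the map
    err_after = reprojection_error(corrected, observed_uv, project)
    return err_after > err_before - tol
```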
{"title":"Dynamic feature detection using virtual correction and camera oscillations","authors":"M. Heshmat, M. Abdellatif, Kazuaki Nakamura, A. Abouelsoud, N. Babaguchi","doi":"10.1109/IC3D.2014.7032584","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032584","url":null,"abstract":"Visual SLAM algorithms exploit natural scene features to infer the camera motion and build a map of a static environment. In this paper, we relax the severe assumption of a static scene to allow for the detection and deletion of dynamic points. A new \"virtual correction\" method is introduced which serves to detect the dynamic points by checking the re-projection error of the points before and after the virtual measurement update. It can also recover the erroneously excluded useful features, particularly the distant points which may be deleted because of the change in its position after new measurement observation. Deliberate camera oscillations are also used to improve the VSLAM accuracy and the camera observability. The simulation results showed the effectiveness of the virtual correction when combined with camera oscillation in recovering the misclassified features and detecting the dynamic features even in difficult scenarios.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125607046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032603
A subjective evaluation of true 3D images
R. R. Tamboli, K. Vupparaboina, Jayanth Reddy Regatti, S. Jana, Sumohana S. Channappayya
We present the results of the first-ever subjective evaluation of true 3D images performed on a light field display. Given the ever-increasing volume of true 3D image content being created and consumed, it is imperative to construct a systematic framework for the subjective evaluation of such content. We first describe our experimental setup and propose a methodology for subjective evaluation on it. We then describe the dataset used for our study. Subjective evaluation results are reported for 20 subjects. In addition to the subjective results, we also report results of popular full-reference objective 2D image quality assessment methods applied on a per-view basis.
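
As an example of the per-view objective scoring mentioned above, the sketch below applies a standard full-reference 2D metric (SSIM, via scikit-image) to each rendered view against its reference and averages the scores. The choice of SSIM and the simple averaging are illustrative, not necessarily the exact metrics used in the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def per_view_quality(reference_views, distorted_views):
    """Mean SSIM over corresponding (reference, distorted) view pairs,
    assuming 8-bit grayscale views of a light field display."""
    scores = [structural_similarity(ref, dist, data_range=255)
              for ref, dist in zip(reference_views, distorted_views)]
    return float(np.mean(scores)), scores
```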
{"title":"A subjective evaluation of true 3D images","authors":"R. R. Tamboli, K. Vupparaboina, Jayanth Reddy Regatti, S. Jana, Sumohana S. Channappayya","doi":"10.1109/IC3D.2014.7032603","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032603","url":null,"abstract":"We present the results of the first-ever subjective evaluation of true 3D images performed on a light field display. Given the ever-increasing volume of true 3D image content being created and consumed, it is imperative to construct a systematic framework for the subjective evaluation of such content. We first describe our experimental setup and propose a methodology for subjective evaluation on the setup. We then describe the dataset used for our study. Subjective evaluation results are reported for 20 subjects. In addition to subjective results, we also report results of popular full-reference objective 2D image quality assessment methods applied on a per view basis.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123606318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032583
3D models over the centuries: From old floor plans to 3D representation
C. Riedinger, M. Jordan, Hedi Tabia
This paper presents a set of algorithms dedicated to the 3D modeling of historical buildings from a collection of old architectural plans, including floor plans, elevations, and sections. Image processing algorithms detect and localize the main structures of the building in the floor plans (thick and thin walls, openings). Extruding the walls allows us to build a first 3D model. We compute height information and add textures to the model by analyzing the elevation images from the same collection of documents. We applied this pipeline to XVIIIth-century plans of the Château de Versailles and show results for two different parts of the Château.
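
A minimal sketch of the extrusion step, assuming a wall footprint has already been detected in the floor plan as a 2D polygon and a wall height has been estimated from the elevation drawings; all names below are hypothetical.

```python
import numpy as np

def extrude_wall(footprint_xy, height):
    """Lift an (N, 2) wall footprint polygon to the vertices of a 3D prism:
    N bottom vertices at z=0 followed by N top vertices at z=height."""
    footprint_xy = np.asarray(footprint_xy, dtype=float)
    n = len(footprint_xy)
    bottom = np.hstack([footprint_xy, np.zeros((n, 1))])
    top = np.hstack([footprint_xy, np.full((n, 1), float(height))])
    return np.vstack([bottom, top])  # side faces join vertex i to vertex i + n
```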
{"title":"3D models over the centuries: From old floor plans to 3D representation","authors":"C. Riedinger, M. Jordan, Hedi Tabia","doi":"10.1109/IC3D.2014.7032583","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032583","url":null,"abstract":"This paper presents a set of algorithms dedicated to the 3D modeling of historical buildings from a collection of old architecture plans, including floor plans, elevations and cutoffs. Image processing algorithms help to detect and localize main structures of the building from the floor plans (thick and thin walls, openings). The extrusion of the walls allow us to build a first 3D model. We compute height informations and add textures to the model by analyzing the elevation images from the same collection of documents. We applied this pipeline to XVIIIth century plans of the Château de Versailles, and show results for two different parts of the Château.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133306864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032582
Turning a ToF camera into an illumination tester: Multichannel waveform recovery from few measurements using compressed sensing
Miguel Heredia Conde, K. Hartmann, O. Loffeld
A critical element of any Time-of-Flight (ToF) 3D imaging system is the illumination. Most commercial solutions are restricted to short-range indoor operation and use simple illumination setups of one or a few LEDs grouped together. Recent developments towards medium- and long-range ToF imaging, ready for outdoor operation, bring the need for powerful illumination setups made up of many emitters, which might be grouped in distributed modules. Since the depth accuracy of ToF cameras strongly depends on the quality of the illumination waveform, ensuring that a complex illumination system provides a homogeneous, in-phase wavefront is critically important for minimizing systematic inaccuracies. In this work we present a novel framework for simultaneous multichannel testing of illumination waveforms, which recovers the waveform of the incident light at each pixel of a ToF camera by exploiting the sparsity of typical continuous-wave (CW) illumination signals in the frequency domain.
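
The recovery idea can be sketched generically: each pixel's waveform is sparse in the DFT basis, so it can be reconstructed from a few samples with a standard sparse solver such as orthogonal matching pursuit. The toy example below is a generic compressed-sensing sketch under that assumption, not the authors' exact measurement model or solver.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: solve y ~= A @ c with an n_nonzero-sparse c."""
    residual, support, coef = y.astype(complex), [], None
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    c = np.zeros(A.shape[1], dtype=complex)
    c[support] = coef
    return c

# Toy setup: a length-256 CW-like waveform observed at only 48 random instants.
N, M = 256, 48
rng = np.random.default_rng(0)
rows = rng.choice(N, size=M, replace=False)
D = np.fft.ifft(np.eye(N), axis=0)             # inverse-DFT synthesis dictionary
x = np.cos(2 * np.pi * 4 * np.arange(N) / N)   # only two nonzero DFT coefficients
c = omp(D[rows], x[rows], n_nonzero=2)         # recover the sparse spectrum
x_rec = np.real(D @ c)                         # full waveform from 48 samples
```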
{"title":"Turning a ToF camera into an illumination tester: Multichannel waveform recovery from few measurements using compressed sensing","authors":"Miguel Heredia Conde, K. Hartmann, O. Loffeld","doi":"10.1109/IC3D.2014.7032582","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032582","url":null,"abstract":"A critical element of any Time-of-Flight (ToF) 3D imaging system is the illumination. Most commercial solutions are restricted to short range indoor operation and use simple illumination setups of single or few LEDs, grouped together. Recent developments towards medium and long range ToF imaging, ready for outdoor operation, bring the need for powerful illumination setups, constituted by many emitters, which might be grouped in distributed modules. Provided that the depth accuracy of ToF cameras strongly depends on the quality of the illumination waveform, assuring that a complex illumination system is providing a homogeneous in-phase wavefront is of capital importance to minimize systematic inaccuracies. In this work we present a novel framework for multichannel simultaneous testing of illumination waveforms, which is able to recover the waveform of the incident light on each pixel of a ToF camera, exploiting the sparsity of typical continuous wave (CW) illumination signals in frequency domain.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126377460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032581
Iterative refinement for real-time local stereo matching
Maarten Dumont, Patrik Goorts, S. Maesen, Donald Degraen, P. Bekaert, G. Lafruit
We present a novel iterative refinement process applicable to any stereo matching algorithm. The quality of its disparity map output is increased using four rigorously defined refinement modules, which can be iterated multiple times: a disparity cross check, bitwise fast voting, invalid disparity handling, and median filtering. We apply our refinement process to our recently developed aggregation window method for stereo matching that combines two adaptive windows per pixel region [2]: one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape that closely follows all object edges and thereby achieves increased hypothesis confidence. We demonstrate that the iterative disparity refinement has a large effect on the overall quality, especially around occluded areas, and tends to converge to a final solution. We perform a quantitative evaluation on various Middlebury datasets. Our whole disparity estimation process supports efficient GPU implementation to facilitate scalability and real-time performance.
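
Two of the four modules can be sketched compactly in NumPy, assuming integer disparity maps for the left and right views; the threshold, the -1 invalid marker, and the filter size are assumptions, and bitwise fast voting is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import median_filter

def cross_check(disp_left, disp_right, max_diff=1):
    """Invalidate pixels whose left and right disparities disagree."""
    h, w = disp_left.shape
    xs = np.tile(np.arange(w), (h, 1))
    xr = np.clip(xs - disp_left.astype(int), 0, w - 1)   # matching right-view column
    consistent = np.abs(disp_left -
                        disp_right[np.arange(h)[:, None], xr]) <= max_diff
    return np.where(consistent, disp_left, -1)           # -1 marks invalid pixels

def refine(disp_left, disp_right):
    """One refinement pass: cross check, then median filtering, which also
    fills small invalid regions with plausible neighbouring disparities."""
    return median_filter(cross_check(disp_left, disp_right), size=3)
```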
{"title":"Iterative refinement for real-time local stereo matching","authors":"Maarten Dumont, Patrik Goorts, S. Maesen, Donald Degraen, P. Bekaert, G. Lafruit","doi":"10.1109/IC3D.2014.7032581","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032581","url":null,"abstract":"We present a novel iterative refinement process to apply to any stereo matching algorithm. The quality of its disparity map output is increased using four rigorously defined refinement modules, which can be iterated multiple times: a disparity cross check, bitwise fast voting, invalid disparity handling, and median filtering. We apply our refinement process to our recently developed aggregation window method for stereo matching that combines two adaptive windows per pixel region [2]; one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape that closely follows all object edges and thereby achieves increased hypothesis confidence. We demonstrate that the iterative disparity refinement has a large effect on the overall quality, especially around occluded areas, and tends to converge to a final solution. We perform a quantitative evaluation on various Middlebury datasets. Our whole disparity estimation process supports efficient GPU implementation to facilitate scalability and real-time performance.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"12 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120936329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032580
Row-interleaved sampling for stereoscopic video coding targeting polarized displays
P. Aflaki, Maryam Homayouni, M. Hannuksela, M. Gabbouj
In this paper, a coding scheme targeting stereoscopic content for polarized displays is introduced. We propose row-interleaved sampling of the views: asymmetry is achieved by selecting odd rows for one view and even rows for the other, matching the format in which they will be shown on a polarized display. The coding performance of several multiview coding schemes with inter-view prediction was analyzed and compared with the anchor case, in which no downsampling is applied to the input content. The objective results show that the proposed row-interleaved sampling scheme outperforms all other schemes.
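
The proposed sampling is easy to state concretely: each view keeps only the rows that the passive polarized display will actually emit for it. A minimal NumPy sketch, with the even/odd row assignment as an assumption:

```python
import numpy as np

def row_interleave_sample(left, right):
    """Keep even rows of the left view and odd rows of the right view,
    halving each view's vertical resolution before coding."""
    return left[0::2], right[1::2]

def interleave_for_display(left_even, right_odd):
    """Re-interleave the two half-height views into one full-height frame
    in the row order a polarized display expects."""
    h = left_even.shape[0] + right_odd.shape[0]
    frame = np.empty((h,) + left_even.shape[1:], dtype=left_even.dtype)
    frame[0::2], frame[1::2] = left_even, right_odd
    return frame
```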
{"title":"Row-interleaved sampling for stereoscopic video coding targeting polarized displays","authors":"P. Aflaki, Maryam Homayouni, M. Hannuksela, M. Gabbouj","doi":"10.1109/IC3D.2014.7032580","DOIUrl":"https://doi.org/10.1109/IC3D.2014.7032580","url":null,"abstract":"In this paper, a coding scheme targeting stereoscopic content for polarized displays is introduced. It is proposed to use row-interleaved sampling of the views. Asymmetry is achieved by selection of odd/even rows for different views based on the format they will be shown on a polarized display. Coding performance of several different multiview coding schemes with inter-view prediction was analyzed and compared with the anchor case where there is no downsampling applied to the input content. The objective results show that the proposed row-interleaved sampling scheme outperforms all other schemes.","PeriodicalId":244221,"journal":{"name":"2014 International Conference on 3D Imaging (IC3D)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125663297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}