Co-parent selection for fast region merging in pyramidal image segmentation
M. Stojmenovic, Andrés Solís Montero, A. Nayak
2010 2nd International Conference on Image Processing Theory, Tools and Applications
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586811
The goal of image segmentation is to partition an image into regions that are internally homogeneous and heterogeneous with respect to neighbouring regions. We build on the pyramid image segmentation work of [3] and [9] with a more efficient method by which children choose parents within the pyramid structure. Instead of considering only four immediate parents as in [3], in [9] each child node considers the neighbours of its candidate parent, and the candidate parents of its neighbouring nodes in the same level. In this paper, we also introduce the concept of a co-parent node for possible region merging at the end of each iteration. The new parents of the former children are co-parent candidates if they are similar. The co-parent is chosen to be the one with the largest receptive field among the candidate co-parents. Each child then additionally considers one more candidate, the co-parent of its previous parent. Other steps in the algorithm, and its overall layout, were also improved. The new algorithm is tested on a set of images. Our algorithm is fast (it produces segmentations within seconds), correctly segments elongated and large regions, is very simple compared to the plethora of existing algorithms, and appears competitive in segmentation quality with the best publicly available implementations. The major improvement over [9] is that it produces visually appealing results at earlier levels of the pyramid segmentation, not only at the top one.
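The co-parent rule described above (among sufficiently similar new parents, the one with the largest receptive field wins) can be sketched as follows. The function name, tuple layout, and similarity threshold are illustrative assumptions, not the authors' implementation:

```python
def choose_coparent(parent_value, candidates, threshold=10.0):
    """Pick a co-parent among candidate nodes (illustrative sketch).

    candidates: list of (mean_value, receptive_field_size) tuples, one per
    new parent of the former children. A candidate qualifies if its value
    is similar to the current parent's; among qualifiers, the one with the
    largest receptive field is chosen. Returns None if none qualifies.
    """
    similar = [c for c in candidates if abs(c[0] - parent_value) <= threshold]
    return max(similar, key=lambda c: c[1]) if similar else None
```

For a parent of value 100 and candidates [(105, 40), (102, 90), (180, 500)], the dissimilar third candidate is excluded and (102, 90) is chosen despite the third's much larger receptive field.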
Human visual system based mammogram enhancement and analysis
Yicong Zhou, K. Panetta, S. Agaian
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586759
This paper introduces a new mammogram enhancement algorithm using human visual system (HVS) based image decomposition. A new enhancement measure based on the second derivative is also introduced to assess enhancement performance. Experimental results show that the presented algorithm can improve the visual quality of fine details in mammograms. The HVS-based image decomposition can segment regions/objects from their surroundings. It gives users the flexibility to enhance either the sub-images containing only significant illumination information or all sub-images of the original mammogram. The algorithm can be used in computer-aided diagnosis systems for breast cancer detection.
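A second-derivative-based measure, in its simplest 1-D form, is the mean absolute second difference of the signal; the sketch below is a generic stand-in for such a measure, not the one defined in the paper:

```python
def second_derivative_measure(signal):
    """Mean absolute second difference |x[i-1] - 2*x[i] + x[i+1]| over the
    interior samples of a 1-D signal. Larger values indicate stronger fine
    detail, which is what an enhancement step is expected to increase."""
    n = len(signal)
    return sum(abs(signal[i - 1] - 2 * signal[i] + signal[i + 1])
               for i in range(1, n - 1)) / (n - 2)
```

A linear ramp scores 0 (no fine detail), while any curvature in the signal raises the measure.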
Improved two-bit transform-based motion estimation via extension of matching criterion
Changryoul Choi, Jechang Jeong
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586808
An improved two-bit transform-based motion estimation algorithm is proposed in this paper. By extending the typical two-bit transform (2BT) matching criterion, the proposed algorithm improves motion estimation accuracy at almost the same computational complexity, while preserving the binary matching characteristic. Experimental results show that the proposed algorithm achieves peak signal-to-noise ratio (PSNR) gains of 0.29 dB on average over conventional 2BT-based motion estimation.
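The generic two-bit transform idea is to quantize each pixel to 2 bits against thresholds derived from block statistics, then match blocks by counting bit mismatches. A minimal sketch follows; the threshold choice and names are assumptions, and the paper's extended criterion is not reproduced here:

```python
def two_bit_transform(block, alpha=0.5):
    """Map each pixel to 2 bits using thresholds mu +/- alpha*sigma,
    where mu and sigma are the block mean and standard deviation."""
    n = len(block)
    mu = sum(block) / n
    sigma = (sum((x - mu) ** 2 for x in block) / n) ** 0.5
    hi, lo = mu + alpha * sigma, mu - alpha * sigma
    return [(x > hi, x > lo) for x in block]

def nnmp(bits_a, bits_b):
    """Number of non-matching points: the binary block-matching criterion,
    computable with XOR/popcount in a packed implementation."""
    return sum(a != b for a, b in zip(bits_a, bits_b))
```

A block matched against itself scores 0; dissimilar blocks score higher, and the motion vector minimizing this count is selected.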
People re-identification by classification of silhouettes based on sparse representation
D. T. Cong, C. Achard, L. Khoudour
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586809
The research presented in this paper consists of developing an automatic system for people re-identification across multiple cameras with non-overlapping fields of view. We first propose a robust algorithm for silhouette extraction based on an adaptive spatio-colorimetric background and foreground model coupled with a dynamic decision framework. Such a method can deal with the difficult conditions of outdoor environments, where lighting is unstable and distracting motions are numerous. A robust classification procedure, which exploits the discriminative nature of sparse representation, is then presented to perform the people re-identification task. The overall system is tested on two real data sets recorded in very difficult environments. The experimental results show that the proposed system gives very satisfactory results compared to other approaches in the literature.
Detecting potential human activities using coherent change detection
N. Milisavljevic, D. Closson, I. Bloch
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586772
This paper describes the detection and interpretation of temporal changes in an area of interest using coherent change detection in repeat-pass Synthetic Aperture Radar imagery, with the main goal of detecting subtle scene changes such as potential human activities. Possibilities for introducing knowledge sources to improve the final result are also presented.
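Coherent change detection relies on the sample coherence between two co-registered complex SAR images, estimated over a local window; a minimal single-window sketch (window sliding omitted):

```python
def coherence(a, b):
    """Sample coherence |sum a_i * conj(b_i)| / sqrt(sum|a_i|^2 * sum|b_i|^2)
    over one estimation window of complex pixels. Near 1 for unchanged
    scenes; drops toward 0 where the scene was disturbed between passes."""
    num = abs(sum(x * y.conjugate() for x, y in zip(a, b)))
    den = (sum(abs(x) ** 2 for x in a) * sum(abs(y) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0
```

Note that a constant phase offset between the two acquisitions leaves the coherence at 1, which is why coherence (rather than amplitude differencing) reveals subtle scene changes.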
Covariance-based adaptive deinterlacing method using edge map
Sang-Jun Park, Gwanggil Jeon, Jechang Jeong
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586741
The purpose of this article is to discuss deinterlacing in a computationally constrained and varied environment. The proposed covariance-based adaptive deinterlacing method using an edge map (CADEM) combines two methods: the modified edge-based line averaging (MELA) method for plain regions and the covariance-based adaptive deinterlacing (CAD) method along edges. CADEM uses the edge map of the interlaced input image to assign the appropriate method, MELA or the modified CAD (MCAD). We first introduce the MCAD method. The principal idea of MCAD is the correspondence between the high-resolution covariance and the low-resolution covariance. MCAD estimates the local covariance coefficients from an interlaced image using Wiener filtering theory and then uses these optimal minimum mean squared error interpolation coefficients to obtain a deinterlaced image. However, the MCAD method, though more robust than most known methods, is not very fast compared with the others. To alleviate this issue, we propose an adaptive selection approach rather than using the MCAD algorithm alone: a hybrid approach that switches between MELA and MCAD to reduce the overall computational load. A reliable switching condition is established using the edge map, a binary image. The results of computer simulations show that the proposed methods outperform a number of methods presented in the literature.
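Edge-based line averaging, the plain-region component named above, interpolates each missing pixel by averaging along whichever of three directions has the smallest absolute difference between the lines above and below. This is a sketch of the generic ELA step only; MELA's modifications are not reproduced:

```python
def ela_pixel(above, below, j):
    """Interpolate the missing pixel at column j between two existing lines.

    Tries directions d in (-1, 0, +1), scores each by |above[j+d] - below[j-d]|,
    and averages along the best-scoring (most edge-aligned) direction.
    """
    scores = [(abs(above[j + d] - below[j - d]), d)
              for d in (-1, 0, 1)
              if 0 <= j + d < len(above) and 0 <= j - d < len(below)]
    _, d = min(scores)
    return (above[j + d] + below[j - d]) // 2
```

The switching idea in CADEM is then simply: run this cheap step where the binary edge map is 0, and the heavier covariance-based MCAD interpolation where it is 1.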
Intravascular Ultrasound image segmentation: A helical active contour method
M. Jourdain, J. Meunier, J. Sequeira, G. Cloutier, J. Tardif
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586803
During an Intravascular Ultrasound (IVUS) examination, a catheter with an ultrasound transducer is introduced into the body through a blood vessel and then pulled back to image a sequence of vessel cross-sections. An IVUS exam results in several hundred noisy images that are often hard to analyze. Hence, powerful automatic analysis tools would facilitate the interpretation of structures in IVUS images. In this paper we present a new IVUS segmentation method based on an original active contour model. The contour has a helical geometry and evolves like a spiral that is distorted until it reaches the artery lumen boundaries. Despite the use of a simple statistical model and a very sparse initialization of the snake, the algorithm converges to satisfactory solutions comparable to those of much more sophisticated segmentation methods. To validate the method, we compared our results to manually traced contours and obtained a Hausdorff distance < 0.61 mm (n = 540 images), indicating the robustness of the method.
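The validation metric quoted above, the Hausdorff distance between two contours, is the largest of all nearest-point distances from either point set to the other; a direct brute-force sketch for small sets:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets
    (e.g. an automatic contour A and a manually traced contour B)."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(X, Y):
        # Worst-case distance from a point of X to its nearest point of Y.
        return max(min(dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))
```

Because it takes the worst-case deviation rather than an average, a small Hausdorff bound such as the < 0.61 mm reported above is a strong statement about agreement along the whole contour.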
Multi-level visual alphabets
Menno Israël, J. Schaar, E. V. D. Broek, M. D. Uyl, P. V. D. Putten
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586757
A central debate in visual perception theory is the argument for indirect versus direct perception; i.e., the use of intermediate, abstract, and hierarchical representations versus direct semantic interpretation of images through interaction with the outside world. We present a content-based representation that combines both approaches. The previously developed Visual Alphabet method is extended with a hierarchy of representations, each level feeding into the next, but based on features that are not abstract but directly relevant to the task at hand. Explorative benchmark experiments are carried out on face images to investigate and explain the impact of key parameters such as pattern size, number of prototypes, and the distance measures used. Results show that adding a middle layer improves results by encoding the spatial co-occurrence of lower-level pattern prototypes.
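One simple way to encode spatial co-occurrence of lower-level prototype labels in a middle layer is a histogram of adjacent label pairs over the grid of per-patch prototype assignments. This is an assumed illustration of the general idea, not the paper's exact encoding:

```python
from collections import Counter

def cooccurrence(label_grid):
    """Histogram of horizontally and vertically adjacent prototype-label
    pairs over a 2-D grid of per-patch prototype assignments."""
    h = Counter()
    for i, row in enumerate(label_grid):
        for j, lab in enumerate(row):
            if j + 1 < len(row):
                h[(lab, row[j + 1])] += 1          # right neighbour
            if i + 1 < len(label_grid):
                h[(lab, label_grid[i + 1][j])] += 1  # neighbour below
    return h
```

The resulting pair counts can serve as middle-layer features fed to the next level of the hierarchy.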
An algorithm for iris extraction
Tomáš Fabián, Jan Gaura, Petr Kotas
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586756
In this paper, we describe a new method for detecting the iris in digital images. Our method is simple yet effective. It takes a statistical approach when searching for the limbic boundary and a more analytical approach when detecting the pupillary boundary. It can be described in three simple steps: first, the bright point inside the pupil is detected; second, the outer limbic boundary is found via statistical measurements of outer boundary points; and third, inner boundary points are found by maximizing a defined cost function. The performance of the presented method is evaluated on a series of iris close-up images and compared with the traditional Hough method.
Support vector machine fusion of multisensor imagery in tropical ecosystems
R. Pouteau, B. Stoll, S. Chabrier
Pub Date: 2010-07-07 | DOI: 10.1109/IPTA.2010.5586788
One of the major challenges of image fusion is being able to process the most complex images at the finest possible integration level and with the most reliable accuracy. The use of support vector machine (SVM) fusion for the classification of multisensor images representing a complex tropical ecosystem is investigated. First, SVMs are trained individually on a set of complementary sources: multispectral images, synthetic aperture radar (SAR) images, and a digital elevation model (DEM). Then an SVM-based decision fusion is performed on the three sources. SVM fusion outperforms all monosource classifications, producing results with the same accuracy as the majority of other comparable studies on cultural landscapes. SVM-based hybrid consensus classification not only balances successful and misclassified results, it also uses misclassification patterns as information. This success is partially due to the integration of DEM-extracted indices, which are relevant to land cover mapping in non-cultural and topographically complex landscapes.
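A generic decision-fusion baseline over per-source classifiers is a (weighted) majority vote; note that the fusion stage above is itself SVM-based and learned, so this vote is only an assumed illustration of what decision fusion combines:

```python
from collections import Counter

def fuse_decisions(labels, weights=None):
    """Weighted majority vote over the class labels predicted by the
    per-source classifiers (e.g. multispectral, SAR, DEM)."""
    weights = weights or [1.0] * len(labels)
    tally = Counter()
    for label, w in zip(labels, weights):
        tally[label] += w
    return max(tally, key=tally.get)
```

Replacing this fixed vote with a trained SVM over the per-source decisions is what lets the fused classifier exploit systematic misclassification patterns as information, as the abstract notes.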