Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300963
N. Kitiyanan, J. Havlicek
Accurate reference point detection is one of the first and most important signal processing steps in automatic fingerprint identification systems. The fingerprint reference point, which is also known as the core point except in the case of arch-type fingerprints, is defined as the location where the concave ridge curvature attains a maximum. We introduce a multi-resolution reference point detection algorithm that calculates the Poincaré index in the modulation domain using an AM-FM model of the fingerprint image. We present experimental results where this new algorithm is tested against the FVC 2000 Database 2 and a second database from the University of Bologna. In both cases, we find that the modulation domain algorithm delivers accuracy and consistency that exceed those of a recent competing technique (Jain, A.K. et al., IEEE Trans. Image Proc., vol.9, no.5, p.846-59, 2000) based on integration of sine components in two adjacent regions.
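The Poincaré index at the heart of this family of methods can be sketched in a few lines. The version below works directly on a pixel-domain ridge-orientation field (the paper instead computes the index in the modulation domain from an AM-FM model); the 8-neighbour loop and the modulo-pi wrapping convention are illustrative assumptions:

```python
import numpy as np

def poincare_index(theta, i, j):
    """Poincare index at (i, j) of a ridge-orientation field theta
    (angles in radians, defined modulo pi), summed over the closed
    8-neighbour loop around the pixel. A core gives ~ +1/2, a delta
    ~ -1/2, and a smooth region ~ 0."""
    # Counter-clockwise closed loop over the 8 neighbours.
    loop = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [theta[i + di, j + dj] for di, dj in loop]
    angles.append(angles[0])  # close the loop
    total = 0.0
    for a, b in zip(angles, angles[1:]):
        d = b - a
        # Orientations are only defined modulo pi: wrap into (-pi/2, pi/2].
        while d > np.pi / 2:
            d -= np.pi
        while d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2 * np.pi)
```

On a synthetic orientation field rotating by pi around a singularity (a core), the index evaluates to +1/2 at the singularity and 0 elsewhere.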
Title: "Modulation domain reference point detection for fingerprint recognition" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300939
M. Smith, A. Khotanzad
A new method for detecting the dissolve production effect within digital videos is proposed. Possible dissolve candidates are first identified from the MPEG-7 edge histogram (Day, N. and Martinez, J.M., Proc. ISO/IEC/SC29/WG11 N4325, 2001) differences accumulated across a sampled region of the video. These potential candidates are then classified as dissolves by tracking objects of interest within the considered video segment. The MPEG-7 descriptors (Day and Martinez, 2001), consisting of the edge histogram, homogeneous texture, dominant colors, and color structure (196 features in all), are extracted from corresponding objects in successive frames for the duration of the potential dissolve sequence. The objects' features are observed to undergo profound changes during a dissolve effect, while changing very little during other types of gradual transition (e.g. camera panning and zooming). These object changes are used to classify the sequence as a dissolve.
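The candidate-identification stage can be sketched as follows. The per-frame edge histograms (e.g. the 80-bin MPEG-7 edge histogram) are assumed to be already extracted; the window length and threshold below are illustrative choices, not the paper's values:

```python
import numpy as np

def dissolve_candidates(histograms, window=10, threshold=0.5):
    """Flag frame windows whose accumulated histogram differences
    suggest a gradual transition.

    histograms : (n_frames, n_bins) array of per-frame edge
    histograms, each normalised to sum to 1.
    Returns start indices of candidate windows."""
    h = np.asarray(histograms, dtype=float)
    # L1 difference between consecutive frames.
    d = np.abs(np.diff(h, axis=0)).sum(axis=1)
    candidates = []
    for s in range(len(d) - window + 1):
        # A dissolve spreads change over many frames: the accumulated
        # difference is large even though each per-frame difference
        # stays below a hard-cut level.
        if d[s:s + window].sum() >= threshold:
            candidates.append(s)
    return candidates
```

On a synthetic sequence that blends linearly from one histogram to another, only windows overlapping the blend are flagged; static segments accumulate zero difference.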
Title: "Unsupervised object-based detection of dissolves in video sequences" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300959
M. Muhlich, R. Mester
The natural characteristics of image signals and the statistics of measurement noise are decisive for designing optimal filter sets and optimal estimation methods in signal processing. Astonishingly, this principle has so far only partially found its way into the field of image sequence processing. We show how a Wiener-type MMSE optimization criterion for the resulting image signal, based on a simple covariance model of images or image sequences, provides direct and intelligible solutions for various, apparently different, problems, such as error concealment or the adaptation of filters to signal and noise statistics.
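A minimal illustration of the unifying idea, under the simplifying assumption of a 1-D signal with a stationary first-order covariance model C[i, j] = rho**|i - j|: error concealment (and, with a different observation mask, interpolation) reduces to the same Wiener-type conditional-mean formula. The function name and parameters are this sketch's, not the paper's:

```python
import numpy as np

def mmse_conceal(y, observed, rho=0.95, sigma_n=0.0):
    """Wiener-type MMSE estimate of the missing samples of a 1-D
    signal from the observed ones, under the covariance model
    C[i, j] = rho**|i - j| and optional white observation noise.

    y        : length-n signal (values at missing positions are ignored)
    observed : boolean mask, True where y is a (noisy) observation
    Returns the full reconstructed signal."""
    n = len(y)
    idx = np.arange(n)
    C = rho ** np.abs(idx[:, None] - idx[None, :])  # signal covariance
    o = np.flatnonzero(observed)
    m = np.flatnonzero(~observed)
    Coo = C[np.ix_(o, o)] + sigma_n ** 2 * np.eye(len(o))  # + noise cov.
    Cmo = C[np.ix_(m, o)]
    x = np.array(y, dtype=float)
    # Conditional mean: x_m = C_mo (C_oo + C_n)^{-1} y_o
    x[m] = Cmo @ np.linalg.solve(Coo, x[o])
    return x
```

For a strongly correlated model, a sample missing from the middle of a constant signal is reconstructed close to its neighbours, as expected.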
Title: "A statistical unification of image interpolation, error concealment, and source-adapted filter design" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300962
M. Homem, N. Mascarenhas, L. Costa
We present two linear, non-iterative approaches for deconvolution of three-dimensional images that produce good approximations of the true fluorescence concentration in computational optical sectioning microscopy. Both proposed filters take into account the nature of the noise arising from low photon counts. We demonstrate the applicability of the methods on a phantom image, using the improvement in signal-to-noise ratio to quantify the restoration results, and on real cell images. We compare the algorithms with the regularized linear least squares algorithm under different levels of Poisson noise.
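The regularized linear least squares baseline the authors compare against can be sketched as a Tikhonov-regularized inverse filter in the frequency domain (shown in 2-D for brevity; the circular-convolution and origin-centred-PSF conventions are simplifying assumptions of this sketch):

```python
import numpy as np

def rls_deconvolve(blurred, psf, lam=1e-2):
    """Regularized linear least squares (Tikhonov) deconvolution:
    X = conj(H) Y / (|H|^2 + lam), computed via the FFT.
    The PSF is zero-padded to the image size and assumed centred at
    the array origin; lam trades noise amplification for sharpness."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

With a well-conditioned PSF and a small regularization weight, the filter inverts a noiseless circular blur almost exactly; larger lam values would be used in the presence of noise.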
Title: "Linear filters for deconvolution microscopy" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300958
T.A. El Doker, J. King, D. Scott
A new paradigm for wafer inspection is being developed that would resolve many of today's pending wafer inspection issues. This paradigm integrates 1) a DRAM fabrication line simulation model, producing synthetic images of "typical" wafer maps and associated defects, with 2) fuzzy clustering/declustering algorithms that identify various defects and 3) a unique defect tracking mechanism that monitors patterns of defects across wafer maps. This approach holds promise for in-line process control by allowing off-site analysis of fabrication line problems and unsupervised adaptation and optimization of application-specific inspection algorithms. The paper reports on the progress made towards the fulfilment of this paradigm.
Title: "Initial results on the development of a new wafer inspection paradigm" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300935
B. Tedla, S. Cabrera, N.J. Parks
Our investigation is aimed at analyzing and restoring images of desert and urban scenes degraded by the atmosphere. The modulation transfer function (MTF) is used to capture the atmospheric distortion and to remove it. The MTF is estimated from a large discontinuity in the scene itself. Results are presented for various images of the same scene taken under different atmospheric conditions. The restoration is performed using the Lucy-Richardson iterative algorithm.
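The Lucy-Richardson iteration itself is compact. A sketch assuming circular convolution via the FFT and a PSF normalised to unit sum and centred at the array origin (the MTF-estimation step is not reproduced here):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=30):
    """Lucy-Richardson iterative restoration:
    x <- x * (psf correlated with blurred / (psf convolved with x)).
    The multiplicative update keeps the estimate nonnegative, which
    suits the Poisson noise model the algorithm derives from."""
    H = np.fft.fft2(psf, s=blurred.shape)

    def conv(x, F):
        return np.real(np.fft.ifft2(np.fft.fft2(x) * F))

    x = np.full_like(blurred, blurred.mean(), dtype=float)
    eps = 1e-12  # guard against division by zero
    for _ in range(n_iter):
        ratio = blurred / (conv(x, H) + eps)
        x = x * conv(ratio, np.conj(H))  # conj(H) = correlation with PSF
    return x
```

Starting from a flat estimate, the iteration progressively sharpens a circularly blurred image toward the original.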
Title: "Analysis and restoration of desert/urban scenes degraded by the atmosphere" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300946
Rongkai Zhao, G. Belford, M. Gabriel
Optimization is a key component of image registration. Because the objective function is non-convex and expensive to evaluate, a common tactic is to set an initial guess and then use multi-resolution or local optimization methods to find a local optimum. For almost all local optimization methods, the initial location in the search space plays a critical role in the accuracy of the registration, and initial guesses are often obtained through data-specific methods. The paper offers a new hybrid optimization method assisted by a density-based clustering algorithm. The new method is less data-specific and therefore more suitable for semi-automatic or automatic image registration. Global optimization does not, in general, guarantee timely convergence; although a genetic algorithm is a component of our hybrid method, the method usually converges within a reasonable time. This new method has been applied to registering high resolution brain images.
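The hybrid structure can be illustrated with a toy sketch, not the paper's algorithm: cluster centres seed the initial population (here `seeds` simply stands in for the density-clustering output), a simple evolution strategy stands in for the genetic algorithm, and a coordinate-descent polish stands in for the local refinement. All names and parameters are this sketch's assumptions:

```python
import numpy as np

def hybrid_optimize(f, seeds, rng, generations=40, pop_extra=20, sigma=0.5):
    """Minimise f over R^d. Population = cluster-derived seeds plus
    random points; elites survive each generation and spawn mutated
    children; the best point is then polished locally."""
    dim = seeds.shape[1]
    pop = np.vstack([seeds, rng.uniform(-10, 10, size=(pop_extra, dim))])
    for _ in range(generations):
        scores = np.array([f(p) for p in pop])
        elites = pop[np.argsort(scores)[:max(4, len(seeds))]]
        children = elites[rng.integers(0, len(elites), len(pop) - len(elites))]
        children = children + rng.normal(0.0, sigma, children.shape)
        pop = np.vstack([elites, children])
    x = pop[np.argmin([f(p) for p in pop])]
    # Local polish: fixed-step coordinate descent around the best point.
    step = 0.1
    for _ in range(100):
        for d in range(dim):
            for s in (-step, step):
                cand = x.copy()
                cand[d] += s
                if f(cand) < f(x):
                    x = cand
    return x
```

On a smooth test objective, a seed near the optimum lets the hybrid converge quickly; the global stage is what protects against bad seeds on non-convex objectives.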
Title: "A cluster-assisted global optimization method for high resolution medical image registration" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300944
Qiang-feng Zhou, Limin Ma, Min Zhou, D. Chelberg
Strong image segmentation is a very challenging problem in computer vision research. Both data-driven and model-driven approaches have been investigated in the past two decades, and many approaches have been proposed. Although model-based approaches are more promising for strong image segmentation, data-driven approaches offer more general frameworks that could potentially segment general scenes without any prior model information. We discuss the problems of strong image segmentation from a data-driven perspective and present a modeling technique that describes an object by both its segments and a hierarchical relationship among them. The paper is devoted to the feasibility of data-driven approaches for strong image segmentation. Existing approaches are not suitable for strong image segmentation in complex environments, but preliminary experimental results show the feasibility of our proposed model.
Title: "Strong image segmentation from a data-driven perspective: impossible?" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300952
R. Araújo, F. Medeiros, Rodrigo C. S. Costa, R. Marques, R. B. Moreira, J.L. Silva
The paper proposes an algorithm to segment spots in synthetic aperture radar (SAR) images in order to support environmental remote monitoring. The approach isolates dark areas that may have originated from oil pollution. The proposed algorithm combines a region growing approach with a multiscale analysis based on an undecimated wavelet transform to localize dark areas in the sea. Applied to SAR images, the undecimated wavelet transform smooths the speckle noise while enhancing edges, providing a better input for the modified region growing approach that performs the segmentation. A minmax scheme is used to post-process the segmented image. The algorithms were tested on real SAR images of oil spills.
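The region-growing stage can be sketched as follows; the undecimated-wavelet smoothing and the minmax post-processing are omitted, and the grow-while-close-to-the-running-mean rule is an illustrative choice rather than the paper's criterion:

```python
import numpy as np
from collections import deque

def grow_dark_region(img, seed, tol=0.15):
    """Grow a dark region (candidate oil-spill area) from a seed
    pixel: 4-connected neighbours join the region while their
    intensity stays within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    mean, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj] \
                    and abs(img[ni, nj] - mean) <= tol:
                mask[ni, nj] = True
                mean = (mean * count + img[ni, nj]) / (count + 1)
                count += 1
                q.append((ni, nj))
    return mask
```

Seeded inside a dark patch on a brighter background, the region stops exactly at the intensity discontinuity, which is why the edge-preserving wavelet smoothing matters for speckled SAR data.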
Title: "Spots segmentation in SAR images for remote sensing of environment" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).
Pub Date: 2004-03-28. DOI: 10.1109/IAI.2004.1300961
Z. Hammal, A. Caplier
The aim of our work is automatic facial expression analysis based on the study of the temporal evolution of facial feature boundaries. Previously, we developed a robust and fast algorithm for accurate lip contour segmentation (Eveno, N. et al., IEEE Trans. Circuits and Systems for Video Technology, 2004). Here, we focus on eye and eyebrow boundary extraction. The segmentation of eyes and eyebrows involves three steps: first, an accurate model based on flexible curves is defined for each feature; second, the models are initialized on the image to be processed after the detection of characteristic points such as eye corners; third, the models are accurately fitted to the facial features of an image using luminance gradient information. The performance of our method is evaluated by a quantitative comparison with a manual ground truth and by the analysis of expression skeletons built from the results of our facial feature segmentation.
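The fit-by-gradient step can be illustrated with a much simpler model than the paper's flexible curves: a one-parameter parabolic arc anchored at two detected corner points, scored by the luminance-gradient magnitude accumulated along it. The function and its parameters are illustrative assumptions:

```python
import numpy as np

def fit_arc(grad_mag, left, right, curvatures, n_samples=20):
    """Choose, among candidate curvature values, the parabolic arc
    between two corner points (x, y) whose sampled points maximise
    the accumulated gradient magnitude. Returns the best curvature."""
    (x0, y0), (x1, y1) = left, right
    t = np.linspace(0.0, 1.0, n_samples)
    best_c, best_score = None, -np.inf
    for c in curvatures:
        xs = x0 + t * (x1 - x0)
        ys = y0 + t * (y1 - y0) + c * t * (1 - t)  # parabolic bulge
        score = sum(grad_mag[int(round(y)), int(round(x))]
                    for x, y in zip(xs, ys))
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```

Given a gradient map with high values along a known arc, the search recovers that arc's curvature; real feature models add more parameters per curve but follow the same maximise-gradient-along-the-model principle.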
Title: "Eyes and eyebrows parametric models for automatic segmentation" (6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004).