Compressive Sensing (CS) is an emerging signal processing technique in which a sparse signal is reconstructed from a small set of random projections. In the recent literature, CS techniques have demonstrated promising results for signal compression and reconstruction. However, their potential as dimensionality reduction techniques for time series has not been significantly explored to date. To this end, this work investigates the suitability of compressive-sensed time series for human action recognition. The paper presents results from four sets of experiments: (1) the time series are transformed into the CS domain and fed into a hidden Markov model (HMM) for action recognition; (2) the time series are explicitly reconstructed after CS compression and then used for recognition; (3) the time series are compressed by a hybrid CS-Haar basis prior to input into the HMM; (4) the time series are reconstructed from the hybrid CS-Haar basis and used for recognition. We further compare these approaches with alternative techniques such as sub-sampling and filtering. Our results show that the application of CS does not degrade recognition accuracy; rather, it often increases it. This shows that CS can provide a desirable form of dimensionality reduction for pattern recognition over time series.
Óscar Pérez, R. Xu, M. Piccardi, "Compressive Sensing of Time Series for Human Action Recognition," 2010 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2010), Dec. 2010. doi:10.1109/DICTA.2010.83
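As a sketch of the compression step described above (not the authors' code; the signal length, measurement count, and Gaussian sensing matrix are illustrative assumptions), a length-N sparse time series is reduced to M << N measurements by a random projection:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 128, 32                     # original length, number of CS measurements
x = np.zeros(N)
x[[5, 40, 90]] = [1.0, -0.5, 2.0]  # a 3-sparse "time series"

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random Gaussian sensing matrix
y = Phi @ x                        # compressed vector that would be fed to the HMM
print(y.shape)                     # (32,)
```

The compressed vector y (or, in the reconstruction experiments, an estimate of x recovered from y by sparse optimization) would then serve as the HMM's observation.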
A multimodal biometric system employing hand-based modalities (palmprint, hand veins, and hand geometry) is developed. The proposed approach to decision-level fusion combines the decisions from these modalities using t-norms due to Hamacher, Yager, Weber, and Schweizer-Sklar. These norms handle the uncertainty and imperfection pervading the different sources of knowledge (the error rates of the different modalities). The proposed biometric system is computationally fast and outperforms decision-level fusion based on the conventional rules (OR, AND). Experimental evaluation on a database of 100 users confirms the effectiveness of the decision-level fusion, with preliminary results that are encouraging in terms of both decision accuracy and computing efficiency.
M. Hanmandlu, J. Grover, V. Madasu, "Decision Level Fusion Using t-Norms," DICTA 2010. doi:10.1109/DICTA.2010.15
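For illustration, two of the named t-norm families can be sketched as follows; the match scores and the pairwise fusion order are hypothetical, and the exact parameterizations used in the paper are not reproduced here:

```python
def hamacher(a, b):
    # Hamacher product t-norm: T(a, b) = ab / (a + b - ab), with T(0, 0) = 0
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def yager(a, b, p=2.0):
    # Yager t-norm with parameter p > 0
    return max(0.0, 1.0 - ((1 - a) ** p + (1 - b) ** p) ** (1.0 / p))

# Hypothetical normalized match scores from palmprint, hand-vein and
# hand-geometry matchers, fused pairwise (t-norms are associative in effect here):
palm, vein, geom = 0.90, 0.80, 0.85
fused = hamacher(hamacher(palm, vein), geom)
```

Because every t-norm is bounded above by min(a, b), such fusion is conservative: all modalities must agree before the fused score is high.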
Video stabilization is often considered a largely solved problem, but related problems still need research attention. One such issue arises when multiple unstable video streams come from multiple sensors that often carry complementary information. To enhance system performance, instability should be removed in a single pass rather than by stabilizing each sensor individually. This paper proposes a cooperative video stabilization framework, VSAMS, for multisensory aerial data based on robust boosting curves, which encapsulate the stability of high spatial-frequency information as exploited by flying parakeets (budgerigars). To reduce shake and jitter while preserving the actual camera path, a multistage smoothing approach is devised. Experiments are performed on multisensory UAV data containing infrared and electro-optical video streams. Subjective and objective quality evaluations demonstrate the effectiveness of the proposed cooperative stabilization framework.
Anwaar Ul Haq, I. Gondal, M. Murshed, "VSAMS: Video Stabilization Approach for Multiple Sensors," DICTA 2010. doi:10.1109/DICTA.2010.76
In this paper, a new method for face localization in color images based on co-evolutionary systems is introduced. The proposed method uses a co-evolutionary system to locate the eyes in a face image. The co-evolutionary system involves two genetic algorithm (GA) models: the first GA model searches for a solution in the given environment, and the second GA model searches for useful genetic information within the first. Using the eye locations in the image, the parameters of the face's bounding ellipse (center, orientation, major and minor axes) are then computed. To evaluate and compare the proposed method with other methods, high-order Pseudo Zernike Moments (PZMs) are used to produce feature vectors and a Radial Basis Function (RBF) neural network is used as the classifier. Simulation results indicate that the speed and accuracy of the system using the proposed co-evolutionary face localization method are higher than those of the system proposed in [10].
F. Hajati, C. Lucas, Yongsheng Gao, "Face Localization Using an Effective Co-evolutionary Genetic Algorithm," DICTA 2010. doi:10.1109/DICTA.2010.116
Recently, a model-based depth estimation technique was proposed that estimates surface model parameters by means of Hooke-Jeeves optimization. Assuming a parametric surface model, the parameters that best explain the perspective changes of the surface between different views are estimated. This fits models directly to stereo images, in contrast to the usual approach of fitting models to pre-processed disparity data. In this paper, we compare image fitting based on Hooke-Jeeves, image fitting based on gradient descent, and disparity fitting based on RANSAC. We show that both image-fitting approaches are sensitive to occlusion, and we propose a simple pre-processing step that eliminates this problem. Our experiments reveal that all three approaches have similar depth accuracy; however, tests under challenging conditions show that fitting based on Hooke-Jeeves is more robust than RANSAC and gradient descent.
Nils Einecke, J. Eggert, "Evaluation of Direct Plane Fitting for Depth and Parameter Estimation," DICTA 2010. doi:10.1109/DICTA.2010.89
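The Hooke-Jeeves optimizer referred to above is a derivative-free pattern search: exploratory coordinate moves, followed by extrapolation ("pattern moves") along any successful direction, with step shrinking on failure. A minimal sketch on a toy quadratic (not the paper's depth-estimation objective):

```python
import numpy as np

def explore(f, base, fbase, step):
    """Exploratory move: try +/- step along each coordinate, keep improvements."""
    x, fx = base.copy(), fbase
    for i in range(len(x)):
        for d in (step, -step):
            trial = x.copy()
            trial[i] += d
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                break
    return x, fx

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=10000):
    base = np.asarray(x0, dtype=float)
    fbase = f(base)
    for _ in range(max_iter):
        new, fnew = explore(f, base, fbase, step)
        if fnew < fbase:
            # pattern move: extrapolate along the successful direction, re-explore
            while fnew < fbase:
                pattern = new + (new - base)
                base, fbase = new, fnew
                new, fnew = explore(f, pattern, f(pattern), step)
        elif step <= tol:
            break
        else:
            step *= shrink
    return base, fbase

xmin, fmin = hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
print(xmin)  # converges to approximately [1, -2]
```

The absence of gradients is what makes this attractive for fitting directly to image data, where the objective (a warping-based matching cost) is noisy and not differentiable in closed form.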
The automatic extraction of contour lines and generation of digital elevation models (DEMs) from topographic maps is challenging, mostly because of aliasing, false colors, closely spaced lines, and other features that intersect or overlap the contours. In this paper we present an algorithm to extract contour lines from color images of scanned topographic maps. We first segment the color image using adaptive thresholding to extract the basic contour structure, and remove noise with morphological operations. Next, the contour lines are thinned to unit thickness using Zhang's thinning algorithm, and the bifurcations and holes that result from thinning are removed using different masks. The endpoints of broken contours are then identified, and the best candidate for connection is determined by analyzing the Euclidean distance and direction of the endpoints on either side of the gap; broken contour lines are finally joined by curve fitting. The algorithm is tested on several samples of topographic maps, and the results show good segmentation of the contour lines. Such automatic extraction of contour lines from topographic maps can save a significant amount of time and labor and improve the accuracy of contour-line extraction.
Sadia Gul, Muhammad Faisal Khan, "Automatic Extraction of Contour Lines from Topographic Maps," DICTA 2010. doi:10.1109/DICTA.2010.105
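The endpoint-matching step can be illustrated with a toy sketch; the coordinates, thresholds, and the direction test below are assumptions for illustration, not the paper's exact criteria:

```python
import numpy as np

# Hypothetical endpoints of broken contour segments: (x, y, direction in radians)
endpoints = [
    (10.0, 10.0, 0.0),     # heading right
    (15.0, 10.5, np.pi),   # heading left, i.e. back toward the first endpoint
    (40.0, 40.0, np.pi / 2),
]

def best_match(i, endpoints, max_gap=8.0, max_angle=np.pi / 4):
    """Pick the endpoint whose position and direction best continue endpoint i."""
    xi, yi, ti = endpoints[i]
    best, best_d = None, max_gap
    for j, (xj, yj, tj) in enumerate(endpoints):
        if j == i:
            continue
        d = np.hypot(xj - xi, yj - yi)
        # directions should be roughly opposite for the curves to join smoothly
        ang = abs(abs(ti - tj) - np.pi)
        if d < best_d and ang < max_angle:
            best, best_d = j, d
    return best

print(best_match(0, endpoints))  # 1: nearby and facing back along the gap
```

Once a pair is accepted, the gap between the two endpoints would be bridged by the curve-fitting step described above.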
This paper proposes an explicit parametric model for colonic polyps. The model captures the overall shape of the polyp and is then used to derive the probability distribution of features relevant to polyp detection. The probability distribution represents the "glocal" properties of the polyp candidates, where glocal properties capture both global and local information about an object. The distribution is implemented on the unit sphere, which is divided into 26 partitions, each capturing local properties of a polyp candidate. From the partitions on the sphere, an observation sequence also defines the global properties of the polyp candidate, and this observation sequence is assessed by explicit models for classification. When it represents the glocal parameters of a polyp candidate, we call the unit sphere a brilliant sphere. The parametric models are estimated from 20 geometric models typifying the various cap shapes of colonic polyps.
Mira Park, Jesse S. Jin, P. Summons, S. Luo, R. Hofstetter, "False Positive Reduction in Colonic Polyp Detection Using Glocal Information," DICTA 2010. doi:10.1109/DICTA.2010.12
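One plausible way to divide the unit sphere into 26 partitions is by the 3x3x3 neighborhood directions, excluding the center; this is an assumed scheme for illustration, since the paper's exact partitioning is not reproduced here:

```python
import numpy as np

def sphere_bin(v):
    """Map a unit vector to one of the 26 bins in {-1, 0, 1}^3 minus the origin."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    # A unit vector's largest component exceeds 1/sqrt(3) > 0.5 in magnitude,
    # so thresholding at 0.5 can never yield the all-zero bin.
    q = np.where(v > 0.5, 1, np.where(v < -0.5, -1, 0))
    return tuple(int(c) for c in q)

print(sphere_bin([1, 0, 0]))       # (1, 0, 0)
print(sphere_bin([1, 1, 1]))       # (1, 1, 1)
print(sphere_bin([0.6, -0.8, 0]))  # (1, -1, 0)
```

Binning surface normals (or other directional features) of a candidate this way yields the per-partition local statistics from which a global observation sequence can be read off.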
Content-based image retrieval (CBIR) has been an active research area since the mid-1990s, with a major focus on feature extraction due to its significant impact on retrieval performance. When applying CBIR in the medical domain, different imaging modalities and anatomical regions require different feature extraction methods that integrate domain-specific knowledge for effective image retrieval. This paper presents new CBIR techniques for positron emission tomography/computed tomography (PET-CT) lung images, which exhibit special characteristics such as similar image intensities for lung tumors and soft tissues. Adaptive texture feature extraction and structural signature representation are proposed and implemented on top of our recently developed CBIR framework. Evaluation on clinical data from lung cancer patients at various disease stages demonstrates the method's benefits.
Yang Song, Weidong (Tom) Cai, S. Eberl, M. Fulham, D. Feng, "Structure-Adaptive Feature Extraction and Representation for Multi-modality Lung Images Retrieval," DICTA 2010. doi:10.1109/DICTA.2010.37
Abnormalities in the retinal vessel tree are associated with different pathologies and usually affect arteries and veins differently. In this regard, the arteriovenous ratio (AVR) is a measure of retinal vessel caliber widely used in medicine to study the influence of these irregularities on disease evolution. Hence, the development of an automatic tool for AVR computation, like any other diagnosis-support tool, needs an objective, reliable, and fast artery/vein classifier. This paper proposes a technique to improve retinal vessel classification in an AVR computation framework. The proposed methodology combines a color clustering strategy with a vessel tracking procedure based on minimal-path approaches. Tests on 58 images manually labeled by three experts show promising results.
S. G. Vázquez, Brais Cancela, N. Barreira, M. G. Penedo, M. Sáez, "On the Automatic Computation of the Arterio-Venous Ratio in Retinal Images: Using Minimal Paths for the Artery/Vein Classification," DICTA 2010. doi:10.1109/DICTA.2010.106
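For context, the AVR is commonly computed as the ratio of the central retinal artery and vein equivalents (CRAE/CRVE). The sketch below uses Knudtson-style iterative pairing with the constants 0.88 (arteries) and 0.95 (veins) and made-up calibers; this framework's exact formulas may differ:

```python
import math

def central_equivalent(widths, k):
    """Iteratively pair the narrowest and widest calibers into one equivalent."""
    w = sorted(widths)
    while len(w) > 1:
        a, b = w.pop(0), w.pop(-1)          # narrowest and widest remaining
        w.append(k * math.sqrt(a * a + b * b))
        w.sort()
    return w[0]

# Made-up calibers (e.g. micrometres) for the six largest arteries and veins:
arteries = [95.0, 88.0, 102.0, 91.0, 85.0, 99.0]
veins = [130.0, 125.0, 140.0, 128.0, 122.0, 135.0]

avr = central_equivalent(arteries, 0.88) / central_equivalent(veins, 0.95)
```

Because the ratio depends directly on which vessels are labeled artery versus vein, a misclassified vessel shifts calibers between the two pools, which is why an accurate artery/vein classifier is the prerequisite stressed above.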
In this paper we propose a novel approach to geometric shape classification using shape simplification and a discrete hidden Markov model (HMM). The HMM is constructed from the landmark points obtained by shape simplification for each shape image in the dataset, and several useful strategies are employed with the constructed HMM for classification. Experimental results on the common MPEG-7 CE shape database show that our proposed method achieves very good accuracy across different kinds of shapes.
Chi-Man Pun, Cong Lin, "Geometric Invariant Shape Classification Using Hidden Markov Model," DICTA 2010. doi:10.1109/DICTA.2010.75
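Classification with a discrete HMM, as described above, typically scores an observation sequence (here, symbols derived from landmark points) with the forward algorithm and picks the class model with the highest likelihood. A minimal sketch with toy parameters, not the paper's trained models:

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) with start probabilities pi,
    transition matrix A, and discrete emission matrix B[state, symbol]."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()     # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha = alpha / s
    return log_p

# Toy 2-state model: sticky transitions, state i prefers symbol i.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])
ll_smooth = log_likelihood([0, 0, 0, 0], pi, A, B)
ll_jumpy = log_likelihood([0, 1, 0, 1], pi, A, B)
```

Under this sticky model a smooth symbol run scores higher than an alternating one; in the shape-classification setting, one such model per shape class is trained and the class with the largest log-likelihood wins.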