This paper presents an image enhancement system built around an image denoising technique based on the Dual-Tree Complex Wavelet Transform (DT-CWT). The proposed algorithm first models the noisy remote sensing image (NRSI) statistically by combining its structural features and textures. This statistical model is decomposed using the DT-CWT with Tap-10 (length-10) filter banks based on the Farras wavelet implementation, and the subband coefficients are denoised with a soft-clustering method that combines clustering techniques with soft thresholding. The clustering techniques classify noisy and image pixels using neighbourhood connected component analysis (CCA), connected pixel analysis and inter-pixel intensity variance (IPIV), and compute an appropriate threshold for noise removal. This threshold is then applied with soft thresholding to denoise the image. Experimental results show that the proposed technique outperforms conventional and state-of-the-art techniques, and that images denoised with the DT-CWT achieve a better balance between smoothness and accuracy than those denoised with the DWT. We use the PSNR (Peak Signal-to-Noise Ratio) along with the RMSE to assess the quality of the denoised images.
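The thresholding and evaluation steps mentioned in the abstract can be sketched as follows. This is a minimal illustration of soft thresholding on subband coefficients and of the RMSE/PSNR quality measures — it does not reproduce the paper's CCA/IPIV threshold selection; the universal-threshold rule shown is a standard stand-in assumption.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t (soft thresholding)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def universal_threshold(coeffs):
    """Donoho's universal threshold sigma*sqrt(2*ln N), with sigma
    estimated from the median absolute deviation of the coefficients.
    A stand-in for the paper's CCA/IPIV-derived threshold."""
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(coeffs.size))

def rmse(a, b):
    """Root-mean-square error between two images."""
    return np.sqrt(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak."""
    return 20.0 * np.log10(peak / rmse(a, b))
```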
{"title":"A Novel Algorithm for Image Denoising Using DT-CWT","authors":"S. Faruq, K. Ramanaiah, K. Soundararajan","doi":"10.5121/SIPIJ.2017.8302","DOIUrl":"https://doi.org/10.5121/SIPIJ.2017.8302","url":null,"abstract":"This paper addresses image enhancement system consisting of image denoising technique based on Dual Tree Complex Wavelet Transform (DT-CWT) . The proposed algorithm at the outset models the noisy remote sensing image (NRSI) statistically by aptly amalgamating the structural features and textures from it. This statistical model is decomposed using DTCWT with Tap-10 or length-10 filter banks based on Farras wavelet implementation and sub band coefficients are suitably modeled to denoise with a method which is efficiently organized by combining the clustering techniques with soft thresholding softclustering technique. The clustering techniques classify the noisy and image pixels based on the neighborhood connected component analysis(CCA), connected pixel analysis and inter-pixel intensity variance (IPIV) and calculate an appropriate threshold value for noise removal. This threshold value is used with soft thresholding technique to denoise the image .Experimental results shows that that the proposed technique outperforms the conventional and state-of-the-art techniques .It is also evaluated that the denoised images using DTCWT (Dual Tree Complex Wavelet Transform) is better balance between smoothness and accuracy than the DWT.. 
We used the PSNR (Peak Signal to Noise Ratio) along with RMSE to assess the quality of denoised images.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"123 1","pages":"15-29"},"PeriodicalIF":0.0,"publicationDate":"2017-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83156138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Digital Imaging and Communications in Medicine (DICOM) standard defines an image archive system that serves as an image manager, controlling the acquisition, retrieval and distribution of medical images within an entire picture archiving and communication system. DICOM technology is well suited to sending images between departments within a hospital, to other hospitals and to consultants. However, some hospitals lack a DICOM system. The algorithm proposed in this paper views .dcm image files and converts them to JPEG 2000 standard images, so that they can be viewed with common image viewer programs. The converted files are ready to transfer over the internet and are easily viewable on ordinary computer systems, on both Linux and Windows platforms, using a JPEG 2000 viewer.
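The core of such a conversion — rescaling raw DICOM pixel data (often 12- or 16-bit) into the 8-bit range that common viewers expect — can be sketched as follows. The pydicom/Pillow calls shown in the comment are an assumed toolchain for the surrounding read/save steps, not the paper's implementation.

```python
import numpy as np

def window_to_uint8(pixels):
    """Linearly rescale raw DICOM pixel data to the 8-bit range
    expected by common image viewers (min-max windowing)."""
    lo, hi = float(pixels.min()), float(pixels.max())
    if hi == lo:
        return np.zeros_like(pixels, dtype=np.uint8)
    return ((pixels - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# With pydicom and Pillow installed, the conversion itself would then be:
#   ds = pydicom.dcmread("scan.dcm")
#   Image.fromarray(window_to_uint8(ds.pixel_array)).save("scan.jp2")
```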
{"title":"To Develop a Dicom Viewer Tool for Viewing JPEG 2000 Image and Patient Information","authors":"T. Baraskar, V. Mankar","doi":"10.5121/sipij.2017.8204","DOIUrl":"https://doi.org/10.5121/sipij.2017.8204","url":null,"abstract":"Imaging and Communications in Medicine (DICOM) standard is an image archive system which allow itself to serve as an image manager that control the acquisition, retrieval, and distributions of medical images within entire picture archiving and communication The DICOM technology is suitable when sending images between different departments within hospitals or/and other hospitals, and consultant. However, some hospitals lack the DICOM system. In this paper proposed algorithm view and converts .dcm image files jpeg2000 standard image, whereby the image should be viewable, using with common image viewer programs. Now this files are ready to transfer via internet and easily viewable on normal computer systems using JPEG2000 viewer or on Linux platform and Windows platform.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"27 1","pages":"39-51"},"PeriodicalIF":0.0,"publicationDate":"2017-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89596201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Malignant and Benign Brain Tumor Segmentation and Classification Using SVM with Weighted Kernel Width","authors":"Kimia Rezaei, H. Agahi","doi":"10.5121/SIPIJ.2017.8203","DOIUrl":"https://doi.org/10.5121/SIPIJ.2017.8203","url":null,"abstract":"","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"164 1","pages":"25-37"},"PeriodicalIF":0.0,"publicationDate":"2017-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75852327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Jaba, Mosbah Elsghair, Najeb Tonish, Abdusalam Abugraga
Copyright protection has become a challenging problem in practice. A good watermarking scheme should offer high perceptual transparency and be robust against potential attacks. This paper proposes a spatial-domain watermarking scheme for colour images. The scheme uses the Sobel and Canny edge detection methods to extract edge information from the luminance and chrominance components of the colour image. The edge detection methods determine the embedding capacity of each colour component: larger numbers of watermark bits are embedded into components with more edge information. The robustness of the proposed scheme is analysed against different kinds of image processing attacks, such as blurring and added noise.
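A minimal sketch of the capacity idea: edge pixels found with a Sobel operator on one colour component mark candidate embedding sites, so a component with more edge pixels can carry more watermark bits. The kernel, threshold and capacity rule here are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(channel):
    """Gradient magnitude of one colour component (valid region only)."""
    h, w = channel.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = channel[i:i + 3, j:j + 3]
            gx[i, j] = (patch * SOBEL_X).sum()
            gy[i, j] = (patch * SOBEL_Y).sum()
    return np.hypot(gx, gy)

def embedding_capacity(channel, thresh):
    """Count edge pixels: each is a candidate watermark embedding site."""
    return int((sobel_magnitude(channel) > thresh).sum())
```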
{"title":"Applying Edge Information in YCbCr Color Space on the Image Watermarking","authors":"M. Jaba, Mosbah Elsghair, Najeb Tonish, Abdusalam Abugraga","doi":"10.5121/SIPIJ.2017.8205","DOIUrl":"https://doi.org/10.5121/SIPIJ.2017.8205","url":null,"abstract":"Copyright protection has currently become a difficult domain in reality situation. an honest quality watermarking scheme might to have high sensory activity transparency, and may even be robust enough against potential attacks. This paper tends to propose the special domain based mostly watermarking scheme for color pictures. This scheme uses the Sobel and canny edge detection strategies to work out edge data of the luminance and chrominance elements of the colour image. The edge detection strategies are used to verify the embedding capability of every color element. The massive capacities of watermark bits are embedded into an element of enormous edge information. The strength of the projected scheme is analyzed considering differing kinds of image process attacks, like Blurring and adding noise.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"47 1","pages":"53-63"},"PeriodicalIF":0.0,"publicationDate":"2017-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79431079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a statistical framework for recognising 2D shapes which are represented as an arrangement of curves or strokes. The approach is a hierarchical one which mixes geometric and symbolic information in a three-layer architecture. Each curve primitive is represented using a point-distribution model which describes how its shape varies over a set of training data. We assign stroke labels to the primitives and these indicate to which class they belong. Shapes are decomposed into an arrangement of primitives and the global shape representation has two components. The first of these is a second point-distribution model that is used to represent the geometric arrangement of the curve centre-points. The second component is a string of stroke labels that represents the symbolic arrangement of strokes. Hence each shape can be represented by a set of centre-point deformation parameters and a dictionary of permissible stroke label configurations. The hierarchy is a two-level architecture in which the curve models reside at the nonterminal lower level of the tree. The top level represents the curve arrangements allowed by the dictionary of permissible stroke combinations. The aim in recognition is to minimise the cross entropy between the probability distributions for geometric alignment errors and curve label errors. We show how the stroke parameters, shape-alignment parameters and stroke labels may be recovered by applying the expectation-maximisation (EM) algorithm to the utility measure. We apply the resulting shape-recognition method to Arabic character recognition.
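A point-distribution model of the kind used for the curve primitives can be sketched as PCA over flattened landmark vectors: the mean shape plus a few deformation modes. The pre-alignment step is assumed done, and the mode count is an illustrative choice, not the authors' setting.

```python
import numpy as np

def fit_pdm(shapes, n_modes=2):
    """Fit a point-distribution model.

    shapes: (N, 2k) array of flattened, pre-aligned landmark vectors.
    Returns the mean shape, the top n_modes deformation modes, and the
    per-mode variances."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / max(len(shapes) - 1, 1)
    return mean, vt[:n_modes], variances[:n_modes]

def reconstruct(mean, modes, b):
    """Generate a shape from deformation parameters b (one per mode)."""
    return mean + b @ modes
```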
{"title":"Arabic Handwritten Character Recognition Using Structural Shape Decomposition","authors":"Abdullah A. Al-Shaher, E. Hancock","doi":"10.5121/SIPIJ.2017.8201","DOIUrl":"https://doi.org/10.5121/SIPIJ.2017.8201","url":null,"abstract":"This paper presents a statistical framework for recognising 2D shapes which are represented as an arrangement of curves or strokes. The approach is a hierarchical one which mixes geometric and symbolic information in a three-layer architecture. Each curve primitive is represented using a point-distribution model which describes how its shape varies over a set of training data. We assign stroke labels to the primitives and these indicate to which class they belong. Shapes are decomposed into an arrangement of primitives and the global shape representation has two components. The first of these is a second point distribution model that is used to represent the geometric arrangement of the curve centre-points. The second component is a string of stroke labels that represents the symbolic arrangement of strokes. Hence each shape can be represented by a set of centre-point deformation parameters and a dictionary of permissible stroke label configurations. The hierarchy is a two-level architecture in which the curve models reside at the nonterminal lower level of the tree. The top level represents the curve arrangements allowed by the dictionary of permissible stroke combinations. The aim in recognition is to minimise the cross entropy between the probability distributions for geometric alignment errors and curve label errors. We show how the stroke parameters, shape-alignment parameters and stroke labels may be recovered by applying the expectation maximization EM algorithm to the utility measure. 
We apply the resulting shape-recognition method to Arabic character recognition.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"18 1","pages":"01-11"},"PeriodicalIF":0.0,"publicationDate":"2017-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88361059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object tracking can be defined as the process of detecting an object of interest in a video scene and keeping track of its motion, orientation, occlusion, etc., in order to extract useful information. It is a challenging and important problem, and many researchers are drawn to the field of computer vision, specifically to object tracking in video surveillance. The main purpose of this paper is to give the reader an overview of the state of the art in object tracking, together with the steps involved in background subtraction and its techniques. In the related literature we found three main approaches to object tracking: the first is optical flow; the second is background subtraction, which is divided into two types presented in this paper, along with temporal differencing and the SIFT method; and the last is the mean-shift method. We present a novel approach to background subtraction that compares the current frame with a previously built background model, so that each pixel of the image can be classified as foreground or background; the tracking step then represents our object of interest, a person, by its centroid. The tracking step is divided into two different methods, the surface method and the K-NN method, both of which are explained in the paper. Our proposed method is implemented and evaluated using the CAVIAR database.
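The pixel-classification rule described above — compare the current frame with the background model, label each pixel as foreground or background, then represent the person by a centroid — can be sketched as follows; the threshold value is an illustrative assumption.

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Label a pixel foreground if it differs from the background
    model by more than thresh (per-pixel absolute difference)."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def centroid(mask):
    """Centre of mass (row, col) of the foreground pixels, or None
    if no foreground was detected in this frame."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()
```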
{"title":"A Novel Method for Person Tracking Based K-NN : Comparison with Sift and Mean Shift Method","authors":"Asmaa Ait Moulay, A. Amine","doi":"10.5121/SIPIJ.2017.8104","DOIUrl":"https://doi.org/10.5121/SIPIJ.2017.8104","url":null,"abstract":"Object tracking can be defined as the process of detecting an object of interest from a video scene and keeping track of its motion, orientation, occlusion etc. in order to extract useful information. It is indeed a challenging problem and it’s an important task. Many researchers are getting attracted in the field of computer vision, specifically the field of object tracking in video surveillance. The main purpose of this paper is to give to the reader information of the present state of the art object tracking, together with presenting steps involved in Background Subtraction and their techniques. In related literature we found three main methods of object tracking: the first method is the optical flow; the second is related to the background subtraction, which is divided into two types presented in this paper, then the temporal differencing and the SIFT method and the last one is the mean shift method. We present a novel approach to background subtraction that compare a current frame with the background model that we have set before, so we can classified each pixel of the image as a foreground or a background element, then comes the tracking step to present our object of interest, which is a person, by his centroid. The tracking step is divided into two different methods, the surface method and the K-NN method, both are explained in the paper. 
Our proposed method is implemented and evaluated using CAVIAR database.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"269 1","pages":"45-59"},"PeriodicalIF":0.0,"publicationDate":"2017-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79860958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study aimed to estimate an original voice signal corrupted by noise, using an MMSE method based on the Wiener filter. The Wiener filter is a form of the MMSE estimator studied by previous researchers. The input was a recording of a European woman counting down, distorted by two types of noise: an outdoor noise, the siren of a fire engine, and an indoor noise, recorded in a lecturer's room on campus. The noisy signal is estimated by an MMSE estimator approximated by a Wiener filter, which requires finding and computing the covariances of the signal processes involved in the system. The researchers thus estimated the noisy speech under both interference sources, the fire-engine siren and the faculty room, and assessed the impact in the form of plots of the signal against time, the shape of the signal, and the SNR.
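A frequency-domain Wiener filter of the kind referred to above can be sketched as follows: the per-frequency gain is the ratio of the clean-signal power spectrum to the total (signal plus noise) power spectrum. The spectral-subtraction estimate of the clean-signal PSD used here is an assumption, not the authors' exact estimator.

```python
import numpy as np

def wiener_gain(signal_psd, noise_psd):
    """Per-frequency Wiener gain Pss / (Pss + Pnn)."""
    return signal_psd / (signal_psd + noise_psd)

def wiener_denoise(noisy, noise_psd):
    """Apply a Wiener filter in the frequency domain. The clean-signal
    PSD is approximated by spectral subtraction |Y|^2 - Pnn, floored
    at zero; a tiny epsilon keeps the gain well defined."""
    spec = np.fft.rfft(noisy)
    signal_psd = np.maximum(np.abs(spec) ** 2 - noise_psd, 0.0)
    gain = wiener_gain(signal_psd + 1e-12, noise_psd)
    return np.fft.irfft(gain * spec, n=len(noisy))
```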
{"title":"Analysis of MMSE Speech Estimation Impact in West Sumatra's Noises","authors":"Suardinata, V. Sayuthi, Zainul Efendy","doi":"10.5121/SIPIJ.2017.8102","DOIUrl":"https://doi.org/10.5121/SIPIJ.2017.8102","url":null,"abstract":"This study aimed to estimate the original voice signal which is interrupted by noise with MMSE method based on Wiener filter. The Wiener filter is classified as the MMSE estimator studied by previous researchers. The study assessed the voice signal input count down by European woman that are distorted by two types of noises, the one is noise based on the outdoor location, sound of siren firefighter, and the other is the indoor location noise, which represented by noise in lecturer room in campus. The two process signal is estimated by MMSE estimator which approximated by Wiener filter that must have founded and counted the covariance of each signal processes are related to the system. Thus the researchers tried to estimate both types of sounds noisy European woman are due to interference noise source of fire fighter and faculty room and assess its impact in the form of graphs vote against a function of time, the pattern shape of the signal and SNR.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"11 1","pages":"11-21"},"PeriodicalIF":0.0,"publicationDate":"2017-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86228380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Previous research has found the autocorrelation domain to be an appropriate domain for signal and noise separation. This paper discusses a simple and effective method for decreasing the effect of noise on the autocorrelation of the clean signal, which could later be used in extracting mel-cepstral parameters for speech recognition. Two different methods are proposed to deal with the error introduced by treating speech and noise as completely uncorrelated. The basic approach reduces the effect of noise by estimating it and subtracting its effect from the noisy speech signal autocorrelation. To improve this method, we consider inserting a speech/noise cross-correlation term into the equations used to estimate the clean speech autocorrelation, with the term itself estimated through a kernel method. Alternatively, we estimate the cross-correlation term using an averaging approach. A further improvement was obtained by introducing an overestimation parameter into the basic method. We tested our proposed methods on the Aurora 2 task. The basic method shows considerable improvement over the standard features and some other robust autocorrelation-based features, and the proposed techniques further increase the robustness of the basic autocorrelation-based method.
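The basic approach described above — subtract an estimate of the noise autocorrelation from the noisy-speech autocorrelation, dropping the cross terms under the uncorrelatedness assumption — can be sketched as follows; the biased estimator and lag range are illustrative assumptions.

```python
import numpy as np

def autocorr(x, max_lag):
    """Biased autocorrelation estimate for lags 0..max_lag-1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])

def denoised_autocorr(noisy, noise_est, max_lag):
    """Basic method: subtract the estimated noise autocorrelation from
    the noisy-speech autocorrelation. Speech and noise are assumed
    uncorrelated, so the cross-correlation terms are dropped."""
    return autocorr(noisy, max_lag) - autocorr(noise_est, max_lag)
```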
{"title":"Robust Feature Extraction Using Autocorrelation Domain for Noisy Speech Recognition","authors":"G. Farahani","doi":"10.5121/SIPIJ.2017.8103","DOIUrl":"https://doi.org/10.5121/SIPIJ.2017.8103","url":null,"abstract":"Previous research has found autocorrelation domain as an appropriate domain for signal and noise separation. This paper discusses a simple and effective method for decreasing the effect of noise on the autocorrelation of the clean signal. This could later be used in extracting mel cepstral parameters for speech recognition. Two different methods are proposed to deal with the effect of error introduced by considering speech and noise completely uncorrelated. The basic approach deals with reducing the effect of noise via estimation and subtraction of its effect from the noisy speech signal autocorrelation. In order to improve this method, we consider inserting a speech/noise cross correlation term into the equations used for the estimation of clean speech autocorrelation, using an estimate of it, found through Kernel method. Alternatively, we used an estimate of the cross correlation term using an averaging approach. A further improvement was obtained through introduction of an overestimation parameter in the basic method. We tested our proposed methods on the Aurora 2 task. The Basic method has shown considerable improvement over the standard features and some other robust autocorrelation-based features. 
The proposed techniques have further increased the robustness of the basic autocorrelation-based method.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"21 1","pages":"23-44"},"PeriodicalIF":0.0,"publicationDate":"2017-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87260515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vision Based Hand Gesture Recognition Using Fourier Descriptor for Indian Sign Language","authors":"Archana Ghotkar, P. Vidap, Santosh N. Ghotkar","doi":"10.5121/SIPIJ.2016.7603","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7603","url":null,"abstract":"","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"32 1","pages":"29-38"},"PeriodicalIF":0.0,"publicationDate":"2016-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73298274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-30, DOI: 10.6084/M9.FIGSHARE.6114833.V1
H. Citak, M. Coramik, Y. Ege
Today, coins are used to operate many electrically powered devices open to public use: washing machines, play stations, computers, auto brooms, foam machines, beverage machines, telephone chargers, hair dryers and water heaters are some examples. These devices include coin recognition systems, in which coils at two different radii become electromagnets when current is passed through them. The AC current supplied to the coils creates a varying magnetic field, which induces eddy currents in the coin as it passes. The magnetic field generated by the eddy currents reduces the current through the coil, and the amount of change in that current gives information about the coin: the type of metal (element) and the amount of metal (element). In this study, a new coin identification system (a magnetic measurement system) is designed, in which the magnetic anomaly produced by the coin when direct current is applied to the coils is detected by a fluxgate sensor. Sensor voltages are acquired on a computer using a purpose-built electronic unit and LabVIEW-based software, and the experimental results are discussed in detail in the paper.
{"title":"A FLUXGATE SENSOR APPLICATION: COIN IDENTIFICATION","authors":"H. Citak, M. Coramik, Y. Ege","doi":"10.6084/M9.FIGSHARE.6114833.V1","DOIUrl":"https://doi.org/10.6084/M9.FIGSHARE.6114833.V1","url":null,"abstract":"Today, coins are used to operate many electric devices that are open to the public service. Washing machines, play stations, computers, auto brooms, foam machines, beverage machines, telephone chargers, hair dryers and water heaters are some examples of these devices These devices include coin recognition systems. In these systems, there are coils at two different radius, which become electromagnets when the current is passed through them. The AC current supplied to the coils creates a variable magnetic field, which induces the eddy current on the coil during the passing of money. The magnetic field generated by the Eddy current reduces the current passing through the coil. The amount of change of current in the coil gives information about the coin; the type of metal (element) and the amount of metal (element). In this study, a new coin identification system (magnetic measurement system) is designed. In this system, the magnetic anomaly generated by the coin as a result of applying direct current to the coils is tried to be detected by fluxgate sensor. In this study, sensor voltages are acquired in computer environment by using developed electronic unit and LabVIEW based software. 
In the paper, experimental results have been discussed in detail.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"50 1","pages":"01-10"},"PeriodicalIF":0.0,"publicationDate":"2016-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73642108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}