Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359525
Zhongzheng Ding, Zebin Wu, Wei Huang, Xianliang Yin, Jin Sun, Yi Zhang, Zhihui Wei, Yan Zhang
Pan-sharpening, which addresses the fusion of multispectral (MS) images and panchromatic (PAN) images, has long been a hot spot in image fusion technology. To improve the quality and accuracy of the fused image, this paper proposes a Pan-sharpening method for MS images based on a Back Propagation (BP) neural network, and further uses the Spark platform and the TensorFlowOnSpark (TFOS) framework to optimize the BP neural network. Experimental results show that the proposed method effectively enhances the quality of the fused image, and that the Spark-based parallel optimization of the BP neural network improves computational efficiency while maintaining fusion accuracy.
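As a toy illustration of the BP component the abstract describes, the sketch below trains a one-hidden-layer network with hand-written back-propagation to fuse per-pixel MS and PAN values. The synthetic data, network size, learning rate, and linear fusion target are all assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: per pixel, 4 MS band values plus the co-located PAN
# intensity map to 4 fused band values (an assumed "ideal" target).
X = rng.random((256, 5))                      # [MS1..MS4, PAN]
Y = 0.5 * X[:, :4] + 0.5 * X[:, 4:5]          # toy fusion target

W1 = rng.normal(0, 0.5, (5, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 4)); b2 = np.zeros(4)
lr = 0.1

def forward(X):
    H = np.tanh(X @ W1 + b1)                  # hidden layer
    return H, H @ W2 + b2                     # linear output layer

for _ in range(5000):
    H, P = forward(X)
    G = (P - Y) / len(X)                      # gradient of 0.5*MSE at the output
    GH = (G @ W2.T) * (1.0 - H ** 2)          # back-propagate through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

mse = float(np.mean((forward(X)[1] - Y) ** 2))
```

After training, the per-pixel fusion error is far below the variance of the target, which is all this sketch is meant to show.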
Title: A Pan-sharpening method for multispectral image with back propagation neural network and its parallel optimization based on Spark
Published in: 2017 International Conference on Progress in Informatics and Computing (PIC)
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359559
Yonghui Dai, Jinfei Wang, Haoyuan Pan, Fenfen Zhou, Chen Ye
In recent years, population aging has placed great demand on services for the aged, while old-age service resources remain relatively scarce; this has become an important issue in social development all over the world. To improve the quality of service for the aged, this paper proposes a smart care service based on context awareness. First, environmental data and the physical signs, voice, and facial expressions of the aged are collected. Affective computing and data mining methods are then applied to these data to identify the needs of the aged and provide them with smart care services. Results show that this method can effectively help the aged.
Title: Study on smart care service for the aged based on context awareness
In this paper, we propose a video enhancement method using temporal-spatial total variation Retinex and luminance adaptation. To utilize the temporal information between video frames, we construct an illumination data fidelity term and propose a temporal-spatial total variation model for Retinex. To further enhance the contrast of video frames, we apply adaptive Gamma correction with a weighting distribution as a post-processing step. The proposed method is thus able to enhance the contrast of video frames while producing coherent illumination. Experimental results demonstrate the efficiency of the proposed method.
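The post-processing step named in the abstract, adaptive Gamma correction with a weighting distribution, can be sketched as below. The `pdf ** alpha` weighting is a simplified stand-in for the published AGCWD weighting, so treat the exact formula and parameters as assumptions.

```python
import numpy as np

def agc_weighted(img, alpha=0.5):
    """Adaptive gamma correction with a weighting distribution (sketch).

    Each intensity l maps to 255 * (l/255)**gamma(l), with
    gamma(l) = 1 - cdf_w(l), where cdf_w is the CDF of the weighted
    histogram pdf_w = pdf**alpha (alpha is an assumed smoothing exponent).
    Frequent, brighter levels get a smaller gamma, i.e. a stronger lift.
    """
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    pdf = hist / hist.sum()
    pdf_w = pdf ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    gamma = np.maximum(1.0 - cdf_w, 1e-3)      # keep gamma strictly positive
    levels = np.arange(256) / 255.0
    lut = 255.0 * levels ** gamma              # per-level lookup table
    return lut[img.astype(np.uint8)]

# Dark synthetic frame: enhancement should raise the mean brightness.
frame = (np.random.default_rng(1).random((64, 64)) * 80).astype(np.uint8)
enhanced = agc_weighted(frame)
```

Because gamma(l) ≤ 1 everywhere, the mapping never darkens a pixel, which is why it suits low-light frames.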
Title: Video enhancement using temporal-spatial total variation retinex and luminance adaptation
Authors: Liqian Wang, W. Shao, Qi Ge, Haibo Li, Liang Xiao, Zhihui Wei
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359524
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359564
Yufei Wang, Guangtao Xue, Shiyou Qian, Minglu Li
The hybrid cloud model has received considerable attention in recent years. However, scheduling computing applications in hybrid clouds is challenging because of the various product types and pricing models offered by different cloud providers, in conjunction with the complexity of private cloud resource management. In this paper, we focus on a typical scenario of scheduling computing requests in hybrid clouds, aiming to minimize the costs incurred while keeping the deadline-miss rate of the requests at an acceptable level. We first formulate this problem as an online optimization model and, by applying Lyapunov optimization techniques, transform it into a one-shot binary linear optimization problem that is much easier to solve. Based on this, we develop a hybrid cloud scheduler. Simulation results suggest that our scheduler strikes a good balance between public cloud cost and the deadline-miss rate of computing requests. In addition, the proposed scheduler achieves a nearly optimal resource utilization rate and a good average scheduling delay.
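The Lyapunov-style transformation can be illustrated with a minimal one-shot decision rule: each slot minimizes a weighted sum of money cost and backlog growth. The queue model, parameter `V`, and prices below are illustrative assumptions, not the paper's formulation.

```python
def schedule_slot(queue_len, job_size, V, public_price, private_capacity):
    """One-shot drift-plus-penalty decision (illustrative sketch).

    Per slot, pick the option minimizing V * money_cost + queue_len *
    backlog_growth -- the standard Lyapunov trade-off between spending on
    the public cloud and deadline pressure in the private queue.
    """
    overflow = max(0.0, job_size - private_capacity)  # work the private cloud can't finish now
    score_private = queue_len * overflow              # drift term only (no money spent)
    score_public = V * public_price * job_size        # penalty term only (no backlog growth)
    return "private" if score_private <= score_public else "public"

# Low backlog favors the cheap private cloud; high backlog bursts to public.
decisions = [schedule_slot(q, 2.0, 5.0, 1.0, 1.0) for q in (0.0, 5.0, 100.0)]
```

Raising `V` makes the scheduler more cost-averse and more willing to let the backlog (and hence deadline risk) grow, which mirrors the cost/deadline balance the abstract reports.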
Title: An online cost-efficient scheduler for requests with deadline constraint in hybrid clouds
As researchers have paid close attention to prediction-error expansion in reversible data hiding, a large number of adaptive prediction-error expansion algorithms have emerged. Previous methods often use the correlation of neighboring pixels for prediction, but their accuracy is low in textured image regions. In this paper, we first summarize a reversible data hiding framework based on prediction-error expansion. Within this framework, we propose an iterative regularization method that predicts pixels using a first-order difference edge-preserving operator as the predictor. A continuous iterative algorithm refines the prediction results until they are optimal and stable. In this way, the overall prediction quality of the image is improved, especially in textured regions. Moreover, the first-order difference sum is used to sort the embedding order of the information, which improves the quality of the stego image. Experimental results show that the proposed method outperforms some state-of-the-art methods.
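For readers unfamiliar with prediction-error expansion, the core embed/extract round trip on a single pixel looks like this. It is a generic PEE sketch, without the paper's iterative predictor or the overflow map a real scheme needs.

```python
def embed_bit(x, pred, bit):
    """Prediction-error expansion on one pixel (sketch, no overflow handling).

    The prediction error e = x - pred is expanded to 2e + bit; the marked
    pixel carries the bit in the parity of its new error.
    """
    e = x - pred
    return pred + 2 * e + bit

def extract_bit(x_marked, pred):
    """Recover the hidden bit and restore the original pixel exactly."""
    e_marked = x_marked - pred
    bit = e_marked % 2            # Python's % is non-negative here, even for e < 0
    x = pred + e_marked // 2      # floor division inverts the expansion
    return bit, x

# Round-trip over positive and negative errors, with both bit values.
cases = [(120, 118, 1), (120, 118, 0), (60, 63, 1), (60, 63, 0)]
results = [extract_bit(embed_bit(x, p, b), p) for x, p, b in cases]
```

The better the predictor (the paper's contribution), the smaller `e` and the less the pixel is disturbed, which is why prediction accuracy in texture regions matters.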
Title: Improved reversible information hiding with adaptive prediction
Authors: Ting-Liang Xu, Xinchun Cui, Yingshuai Han, Yusheng Zhang
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359547
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359531
Yong Wei, Bin Xu, Mengyi Ying, Junfeng Qu, R. Duke
Paraspinal muscles support the spine and are the source of movement force. The size, shape, density, and volume of the paraspinal muscle cross-sectional area (CSA) are affected by many factors, such as age, health condition, exercise, and low back pain. Segmenting the paraspinal muscle regions in images is invaluable for measuring and studying them, but manual segmentation of the paraspinal muscle CSA is time-consuming and inaccurate. In this work, an atlas-based image segmentation algorithm is proposed to segment the paraspinal muscles in CT images. To address the challenges posed by variations in muscle shape and in its spatial relationship to other organs, mutual information is used to register the atlas and target images, followed by gradient vector flow contour deformation. Experimental results show that the proposed method successfully segments paraspinal muscle regions in CT images in both intra-patient and inter-patient cases. Furthermore, using mutual information to register the atlas and target images outperforms spine-to-spine registration: it segments the muscle regions accurately without the need for computationally expensive iterative local contour optimization. The results can be used to evaluate paraspinal muscle tissue injury and postoperative back muscle atrophy in spine surgery patients.
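The mutual information score driving the atlas-to-target registration can be computed from a joint intensity histogram. This generic sketch shows why aligned images score far higher than unrelated ones; the bin count and test images are arbitrary choices, not the paper's settings.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images via their joint histogram.

    MI = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) ), estimated from
    binned intensities; higher values indicate better alignment.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image b
    nz = pxy > 0                               # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
img = rng.random((64, 64))
shuffled = rng.permutation(img.ravel()).reshape(64, 64)
mi_self = mutual_information(img, img)         # perfectly aligned
mi_rand = mutual_information(img, shuffled)    # spatially unrelated
```

A registration loop would transform the atlas, recompute this score against the target, and keep the transform that maximizes it.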
Title: Two dimensional paraspinal muscle segmentation in CT images
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359578
Xu-dong Zhong, Yuan-zhi He, Hao Yin, Jingchao Wang, Zhou-quan Du
This paper investigates joint power and timeslot allocation for multi-beam satellite downlinks. The goal is to satisfy users with real-time and non-real-time services under different channel conditions using limited power and timeslots. To guarantee fairness among users, a delay priority is introduced into the optimization, while a delay constraint is used to avoid delay deterioration for real-time users. On this basis, a joint power and timeslot allocation problem is formulated to maximize the weighted sum throughput. Using convex optimization theory, the optimal power and timeslot allocation is derived, and a joint allocation algorithm based on the subgradient method is proposed to determine the optimal solutions. Simulation results show that the proposed algorithm realizes a trade-off between throughput and loss probability with a delay fairness guarantee.
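A generic dual-subgradient allocation of the kind the abstract describes can be sketched as weighted water-filling over a shared power budget. The objective, weights, gains, and step size below are assumptions, not the paper's exact model (which also allocates timeslots and delay priorities).

```python
def allocate_power(weights, gains, P_total, iters=500, step=0.05):
    """Dual subgradient sketch for weighted sum-rate power allocation.

    Maximizes sum_i w_i * log(1 + g_i * p_i) subject to sum_i p_i <= P_total.
    For a fixed dual price lam, the per-user optimum is weighted
    water-filling, p_i = max(0, w_i/lam - 1/g_i); lam is then updated
    along the constraint subgradient sum(p) - P_total.
    """
    lam = 1.0
    for _ in range(iters):
        p = [max(0.0, w / lam - 1.0 / g) for w, g in zip(weights, gains)]
        lam = max(1e-6, lam + step * (sum(p) - P_total))  # raise price if over budget
    return p

# User 2 has twice the weight of user 1 at the same gain, so it gets more power.
p = allocate_power(weights=[1.0, 1.0, 2.0], gains=[1.0, 2.0, 2.0], P_total=3.0)
```

At convergence the power budget is met and higher-weight (or better-channel) users receive more power, the qualitative behavior a delay-priority weighting would exploit.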
Title: Joint power and timeslot allocation based on delay priority for multi-beam satellite downlinks
Zero-shot classification (ZSC) aims to classify images from classes whose training samples are unavailable. A typical approach learns a projection from feature space to attribute space so that a relation between training and test samples can be built. However, a projection learned only from training samples does not generalize to unseen classes because of the domain shift between them. To tackle this issue, we propose a novel method that jointly learns coupled autoencoders to alleviate the distribution divergence of samples. We learn the projection by adopting an encoder-decoder paradigm in both seen and unseen classes. The proposed method is evaluated for zero-shot recognition on two benchmark datasets, achieving competitive results.
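To make the zero-shot pipeline concrete, the sketch below substitutes a closed-form ridge-regression projection for the paper's jointly learned coupled autoencoders, then classifies an unseen-class sample by its nearest class attribute vector. All data and the learning rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two unseen classes are described only by attribute vectors (no images).
class_attrs = np.array([[1.0, 0.0, 1.0],
                        [0.0, 1.0, 0.0]])

# Stand-in for the paper's training: learn a linear projection W from
# seen-class (feature, attribute) pairs by ridge regression.
d, a, n = 8, 3, 300
W_true = rng.normal(size=(a, d))              # hidden ground-truth mapping
X_seen = rng.normal(size=(n, d))
S_seen = X_seen @ W_true.T
W = np.linalg.solve(X_seen.T @ X_seen + 0.1 * np.eye(d), X_seen.T @ S_seen).T

def zsc_predict(x):
    """Project a feature into attribute space; pick the nearest class attribute."""
    s = W @ x
    return int(np.argmin(np.linalg.norm(class_attrs - s, axis=1)))

# A feature synthesized to carry class 0's attributes should map back to it.
x0 = np.linalg.pinv(W_true) @ class_attrs[0]
pred = zsc_predict(x0)
```

The coupled-autoencoder idea adds a decoder constraint (attributes must reconstruct features) on both seen and unseen classes, regularizing `W` against exactly the domain shift this plain regression ignores.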
Title: Coupled autoencoders learning for zero-shot classification with domain shift
Authors: Guangcheng Sun, Songsong Wu, Guangwei Gao, Fei Wu, Xiaoyuan Jing
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359516
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359518
Jinhua Zeng, Jinfeng Zeng, Xiulian Qiu
Deep learning for face identification and verification has proven fruitful. Human faces constitute the main information for human identification, besides gait, body silhouette, etc. Deep learning for forensic face identification can provide quantitative indexes of the similarity between the questioned and the known faces in a case, with the advantage of objective results that are not influenced by expert experience. We study deep learning based face representation for the forensic verification of human images, and discuss its application strategies and technical limitations. We propose a "winner-take-all" strategy for the forensic identification of human images in videos. We envision theories and techniques for the forensic identification of human images in which both qualitative and quantitative analysis methods are included and expert judgment and automatic identification coexist.
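The proposed "winner-take-all" strategy can be stated in a few lines: score every frame's questioned-vs-known similarity and decide on the single best frame, so one clear, well-posed frame can carry the verification. The threshold and scores below are assumed for illustration.

```python
def winner_take_all(frame_scores, threshold=0.8):
    """'Winner-take-all' verification over video frames (sketch).

    Each frame yields a similarity score between the questioned face and
    the known face; the decision uses the single best frame rather than
    an average, so blurry or off-angle frames cannot dilute a clear match.
    The threshold is an assumed operating point, not a calibrated one.
    """
    best = max(frame_scores)
    return best, best >= threshold

# Mostly blurry frames with one clear, frontal match.
scores = [0.31, 0.42, 0.91, 0.38]
best, verified = winner_take_all(scores)
```

Averaging the same scores would fall well under the threshold, which is the failure mode this strategy avoids in low-quality video.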
Title: Deep learning based forensic face verification in videos
Pub Date: 2017-12-01 DOI: 10.1109/PIC.2017.8359588
Qin Zhang, Jianhua Liu, Ying Wang, Zhixiong Zhang
To date, relation classification systems have focused on using various features generated by parsing modules. However, feature extraction is time-consuming, and selecting the wrong features leads to classification errors. In this paper, we study a Convolutional Neural Network method for entity relation classification. It uses word embedding vectors and the original position information of words relative to the entities, instead of features extracted by traditional methods. N-gram features are extracted by filters in the convolutional layer, and whole-sentence features are extracted by the pooling layer. A softmax classifier in the fully connected layer is then applied for relation classification. Experimental results show that randomly initializing the position vector is unreasonable, and that the method using the embedding vector together with the original position information of words performs better. In addition, filters with multiple window sizes can capture sentence features, and the original position information can replace complex window sizes.
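The relative-position input the abstract relies on, each word's signed offset to the two entities fed alongside its word embedding, is simple to construct. This generic sketch assumes single-token entities; the example sentence and indices are illustrative.

```python
def position_features(tokens, e1_idx, e2_idx):
    """Relative-position features for CNN relation classification (sketch).

    Each token gets its signed offset to each entity mention; the CNN
    consumes these alongside word embeddings instead of parse features.
    """
    return [(i - e1_idx, i - e2_idx) for i in range(len(tokens))]

sent = ["the", "company", "acquired", "the", "startup"]
feats = position_features(sent, e1_idx=1, e2_idx=4)
```

In a full model each offset would be clipped to a fixed range and looked up in a (trainable, not randomly frozen) position-embedding table before concatenation with the word vectors.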
Title: A convolutional neural network method for relation classification