Objective quality assessment for image retargeting based on perceptual distortion and information loss
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706443
Chih-Chung Hsu, Chia-Wen Lin, Yuming Fang, Weisi Lin
Image retargeting techniques aim to produce retargeted images with different sizes or aspect ratios for various display screens. Various content-aware image retargeting algorithms have been proposed recently, but there is still no accurate objective metric for assessing the visual quality of retargeted images. In this paper, we propose a novel objective metric for assessing the visual quality of retargeted images based on perceptual geometric distortion and information loss. The proposed metric measures the geometric distortion of a retargeted image by the variation of its SIFT flow, and a visual saliency map is derived to characterize human perception of this distortion. The information loss in the retargeted image, also calculated from the saliency map, is then integrated into the metric. A user study is conducted to evaluate the performance of the proposed metric; the results show that the objective assessments from the proposed metric are consistent with subjective assessments.
{"title":"Objective quality assessment for image retargeting based on perceptual distortion and information loss","authors":"Chih-Chung Hsu, Chia-Wen Lin, Yuming Fang, Weisi Lin","doi":"10.1109/VCIP.2013.6706443","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706443","url":null,"abstract":"Image retargeting techniques aim to obtain retargeted images with different sizes or aspect ratios for various display screens. Various content-aware image retargeting algorithms have been proposed recently. However, there is still no accurate objective metric for visual quality assessment of retargeted images. In this paper, we propose a novel objective metric for assessing visual quality of retargeted images based on perceptual geometric distortion and information loss. The proposed metric measures the geometric distortion of retargeted images by SIFT flow variation. Furthermore, a visual saliency map is derived to characterize human perception of the geometric distortion. On the other hand, the information loss in a retargeted image, which is calculated based on the saliency map, is integrated into the proposed metric. A user study is conducted to evaluate the performance of the proposed metric. Experimental results show the consistency between the objective assessments from the proposed metric and subjective assessments.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126110995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-label classification approach for Facial Expression Recognition
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706330
Kaili Zhao, Honggang Zhang, Mingzhi Dong, Jun Guo, Yonggang Qi, Yi-Zhe Song
Facial Expression Recognition (FER) techniques have already been adopted in numerous multimedia systems. Most previous research assumes that each facial image should be linked to exactly one of a set of predefined affective labels. In practical applications, however, few expressions match exactly one of the predefined affective states. To describe facial expressions more accurately, this paper proposes a multi-label classification approach for FER in which each facial expression can be labeled with one or multiple affective states. By modeling the relationships between labels via a Group Lasso regularization term, a maximum-margin multi-label classifier is presented whose convex optimization formulation guarantees a globally optimal solution. To evaluate the classifier, the JAFFE dataset is extended into a multi-label facial expression dataset by thresholding the continuous labels provided with the original dataset; the labeling results show that multiple labels give a far more accurate description of a facial expression, and the classification results verify the superior performance of our algorithm.
{"title":"A multi-label classification approach for Facial Expression Recognition","authors":"Kaili Zhao, Honggang Zhang, Mingzhi Dong, Jun Guo, Yonggang Qi, Yi-Zhe Song","doi":"10.1109/VCIP.2013.6706330","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706330","url":null,"abstract":"Facial Expression Recognition (FER) techniques have already been adopted in numerous multimedia systems. Plenty of previous research assumes that each facial picture should be linked to only one of the predefined affective labels. Nevertheless, in practical applications, few of the expressions are exactly one of the predefined affective states. Therefore, to depict the facial expressions more accurately, this paper proposes a multi-label classification approach for FER and each facial expression would be labeled with one or multiple affective states. Meanwhile, by modeling the relationship between labels via Group Lasso regularization term, a maximum margin multi-label classifier is presented and the convex optimization formulation guarantees a global optimal solution. To evaluate the performance of our classifier, the JAFFE dataset is extended into a multi-label facial expression dataset by setting threshold to its continuous labels marked in the original dataset and the labeling results have shown that multiple labels can output a far more accurate description of facial expression. At the same time, the classification results have verified the superior performance of our algorithm.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124818731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel image tag saliency ranking algorithm based on sparse representation
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706420
Caixia Wang, Zehai Song, Songhe Feng, Congyan Lang, Shuicheng Yan
With the explosive growth of web image data, ranking image tags so that images can be retrieved accurately from massive collections has become an active research topic. Existing ranking approaches, however, remain far from ideal. This paper proposes a new image tag saliency ranking algorithm based on sparse representation. We first propagate labels from the image level to the region level via Multi-instance Learning driven by sparse representation: the target instance in a positive bag is reconstructed as a sparse linear combination of all instances in the training set, and instances with nonzero reconstruction coefficients are considered similar to the target. A visual attention model is then used for tag saliency analysis. Compared with existing approaches, the proposed method achieves better performance.
{"title":"A novel image tag saliency ranking algorithm based on sparse representation","authors":"Caixia Wang, Zehai Song, Songhe Feng, Congyan Lang, Shuicheng Yan","doi":"10.1109/VCIP.2013.6706420","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706420","url":null,"abstract":"As the explosive growth of the web image data, image tag ranking used for image retrieval accurately from mass images is becoming an active research topic. However, the existing ranking approaches are not very ideal, which remains to be improved. This paper proposed a new image tag saliency ranking algorithm based on sparse representation. we firstly propagate labels from image-level to region-level via Multi-instance Learning driven by sparse representation, which means reconstructing the target instance from positive bag via the sparse linear combination of all the instances from training set, instances with nonzero reconstruction coefficients are considered to be similar to the target instance; then visual attention model is used for tag saliency analysis. Comparing with the existing approaches, the proposed method achieves a better effect and shows a better performance.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122363013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel depth propagation algorithm with color guided motion estimation
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706419
Haoqian Wang, Yushi Tian, Yongbing Zhang
Depth propagation is an effective and efficient way to produce depth maps for a video sequence. However, motion estimation in most existing depth propagation schemes is based only on the estimated depth maps, without considering color information. This paper presents a novel key-frame depth propagation algorithm combining bilateral filtering and motion estimation. A color-guided motion estimation process is proposed that takes both color and depth information into account when estimating motion vectors. In addition, a bidirectional propagation strategy is adopted to reduce the accumulation of depth errors. Experimental results show that the proposed algorithm outperforms most existing techniques in obtaining high-quality depth maps, leading to better quality in the synthesized stereoscopic video.
{"title":"A novel depth propagation algorithm with color guided motion estimation","authors":"Haoqian Wang, Yushi Tian, Yongbing Zhang","doi":"10.1109/VCIP.2013.6706419","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706419","url":null,"abstract":"Depth propagation is an effective and efficient way to produce depth maps for a video sequence. Motion estimation in most existing depth propagation schemes is performed only based on the estimated depth maps without consideration for color information. This paper presents a novel key frame depth propagation algorithm combining bilateral filtering and motion estimation. A color guided motion estimation process is proposed by taking both color and depth information into account when estimating the motion vectors. In addition, a bidirectional propagation strategy is adopted to reduce the accumulation of depth errors. Experimental results show that the proposed algorithm outperforms most of the existing techniques in obtaining high quality depth maps indicating a better effect of the synthesized stereoscopic video.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132165406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object co-detection via low-rank and sparse representation dictionary learning
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706361
Yurui Xie, Chao Huang, Tiecheng Song, Jinxiu Ma, J. Jing
In this paper, we propose an algorithm for detecting individual objects across multiple images in a weakly supervised manner. Specifically, we treat object co-detection as a joint dictionary learning and object localization problem, and propose a novel low-rank and sparse representation dictionary learning algorithm. It aims to learn a compact and discriminative dictionary associated with a specific object category. Unlike previous dictionary learning methods, the sparsity imposed on the representation coefficients, the rank minimization of the learned dictionary, the data reconstruction error, and the low-rank constraint on the sample data are all incorporated into a unified objective function. We then optimize all the constraint terms simultaneously via an extended version of the augmented Lagrange multipliers (ALM) method. Experimental results demonstrate that the proposed algorithm compares favorably with single-object detection methods.
{"title":"Object co-detection via low-rank and sparse representation dictionary learning","authors":"Yurui Xie, Chao Huang, Tiecheng Song, Jinxiu Ma, J. Jing","doi":"10.1109/VCIP.2013.6706361","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706361","url":null,"abstract":"In this paper, we exploit an algorithm for detecting the individual objects from multiple images in a weakly supervised manner. Specifically, we treat the object co-detection as a jointly dictionary learning and objects localization problem. Thus a novel low-rank and sparse representation dictionary learning algorithm is proposed. It aims to learn a compact and discriminative dictionary associated with the specific object category. Different from previous dictionary learning methods, the sparsity imposed on representation coefficients, the rank minimization of learned dictionary, data reconstruction error and the low-rank constraint of sample data are all incorporated in a unitized objective function. Then we optimize all the constraint terms via an extended version of augmented lagrange multipliers (ALM) method simultaneously. The experimental results demonstrate that the low-rank and sparse representation dictionary learning algorithm can compare favorably to other single object detection method.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129033429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient active contour model based on Vese-Chan model and split Bregman method
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706370
Yunyun Yang, Yi Zhao
In this paper we propose an efficient multi-phase image segmentation method for color images based on the piecewise-constant multi-phase Vese-Chan model and the split Bregman method. The proposed model is first presented in a four-phase level set formulation and then extended to a multi-phase formulation. The four-phase and multi-phase energy functionals are defined, and the corresponding minimization problems of the proposed active contour model are presented. The split Bregman method is applied to minimize the multi-phase energy functional efficiently. The proposed model has been applied to synthetic and real color images with promising results, and its advantages are demonstrated by numerical results.
{"title":"Efficient active contour model based on Vese-Chan model and split bregman method","authors":"Yunyun Yang, Yi Zhao","doi":"10.1109/VCIP.2013.6706370","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706370","url":null,"abstract":"In this paper we propose an efficient multi-phase image segmentation for color images based on the piecewise constant multi-phase Vese-Chan model and the split Bregman method. The proposed model is first presented in a four-phase level set formulation and then extended to a multi-phase formulation. The four-phase and multi-phase energy functionals are defined and the corresponding minimization problems of the proposed active contour model are presented. The split Bregman method is applied to minimize the multi-phase energy functional efficiently. The proposed model has been applied to synthetic and real color images with promising results. The advantages of the proposed active contour model have been demonstrated by numerical results.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126739541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complexity model based load-balancing algorithm for parallel tools of HEVC
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706451
Y. Ahn, Tae-Jin Hwang, D. Sim, W. Han
A load balancing algorithm supporting the parallel tools of the HEVC encoder is proposed in this paper. Standardization of HEVC version 1 has been finalized, and its RD performance is known to be about twice that of H.264/AVC, previously the most efficient video coder. However, the computational complexity of the HEVC encoding process, which stems from its hierarchically structured variable block sizes and recursive encoding structure, must be addressed as a prerequisite for commercialization. This paper first presents the baseline performance of the slice- and tile-level parallel tools adopted in HEVC, and then proposes a load balancing algorithm based on a complexity model for slices and tiles. For the four-slice and four-tile cases, the average time-saving gains are 12.05% and 3.81% over simple slice- and tile-level parallelization, respectively.
{"title":"Complexity model based load-balancing algorithm for parallel tools of HEVC","authors":"Y. Ahn, Tae-Jin Hwang, D. Sim, W. Han","doi":"10.1109/VCIP.2013.6706451","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706451","url":null,"abstract":"Load balancing algorithm supporting parallel tools for HEVC encoder is proposed in this paper. Standardization of HEVC version 1 was finalized and which is known that its RD performance is two times better than H.264/AVC which was the most efficient video coder. However, computational complexity of HEVC encoding process derived from variable block sizes based on hierarchical structure and recursive encoding structure should be dealt as a prerequisite for technique commercialization. In this paper, basic performances of slice- and tile-level parallel tools adopted in HEVC are firstly presented and load balancing algorithm based on complexity model for slices and tiles is proposed. For four slices and four tiles cases, average time saving gains are 12.05% and 3.81% against simple slice- and tile-level parallelization, respectively.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121018897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis and approximation of SAO estimation for CTU-level HEVC encoder
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706414
G. Praveen, Ramakrishna Adireddy
In the HEVC standardization process and the HM test model implementation, the SAO operation has been indicated to execute only at the frame level. For low latency, better memory-bandwidth efficiency, and better cache performance, however, most applications need the SAO filter implemented at the CTU level along with the other encoding modules. Likewise, in any ASIC developed for HEVC, all modules are expected to execute at the CTU/CU level for better pipeline performance. In this paper, we present two methods for carrying out SAO offset estimation at the CTU level. Both methods are well suited to realization in pipelined architectures, in both software and hardware. Our experimental results demonstrate that the proposed methods produce results similar to frame-level SAO in both video quality and bit rate, while improving memory-bandwidth and cache efficiency.
{"title":"Analysis and approximation of SAO estimation for CTU-level HEVC encoder","authors":"G. Praveen, Ramakrishna Adireddy","doi":"10.1109/VCIP.2013.6706414","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706414","url":null,"abstract":"In the HEVC standardization process & HM test model implementation, it's been indicated that SAO operation can be executed only at frame level. But for the purpose of low-latency, better memory-bandwidth efficiency and cache performance, it is needed to implement SAO filter at CTU level, along with other encode modules, for majority applications. As well, if any ASIC to be developed for HEVC, all modules are very much expected to execute at CTU/CU level for better pipeline performance. In this paper, we present two methods to carry out SAO offset estimation at CTU level. The proposed two methods are very suitable for the realization in pipe-lined architectures including both software and hardware solutions. Our experimentation results demonstrate that, the proposed two methods produce similar results as SAO frame level results, for both video quality & bit-rate by improving the memory bandwidth and cache performance efficiency.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127804850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing coded video quality with perceptual foveation driven bit allocation strategy
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706373
Junyong You, X. Tai
Contrast sensitivity plays an important role in the visual perception of external stimuli, e.g., video, and it has been taken into account in the development of advanced video coding algorithms. This paper proposes a perceptual foveation model based on accurate prediction of video fixations and modeling of the contrast sensitivity function (CSF). Building on this model, an adaptive bit allocation strategy for H.264/AVC video compression is proposed that exploits the visible-frequency threshold of the human visual system (HVS). A subjective video quality assessment, together with objective quality metrics, demonstrates that the proposed perceptual-foveation-driven bit allocation strategy can significantly improve the perceived quality of coded video compared with the standard coding scheme and another visual-attention-guided coding approach.
{"title":"Enhancing coded video quality with perceptual foveation driven bit allocation strategy","authors":"Junyong You, X. Tai","doi":"10.1109/VCIP.2013.6706373","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706373","url":null,"abstract":"Contrast sensitivity plays an important role in visual perception when viewing external stimuli, e.g., video, and it has been taken into account in development of advanced video coding algorithms. This paper proposes a perceptual foveation model based on accurate prediction of video fixations and modeling of contrast sensitivity function (CSF). Consequently, an adaptive bit allocation strategy in H.264/AVC video compression is proposed by considering visible frequency threshold of the human visual system (HVS). A subjective video quality assessment together with objective quality metrics have been performed and demonstrated that the proposed perceptual foveation driven bit allocation strategy can significantly improve the perceived quality of coded video compared with standard coding scheme and another visual attention guided coding approach.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132566318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seeing actions through scene context
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706382
Hongbo Zhang, Songzhi Su, Shaozi Li, Duansheng Chen, Bineng Zhong, R. Ji
Recognizing human actions does not happen in isolation: the surrounding scene provides strong hints. In this paper, we investigate the possibility of boosting action recognition performance by exploiting the scene context associated with the actions. To this end, we model the scene as a mid-level “hidden layer” that bridges action descriptors and action categories. This is achieved via a scene topic model, in which hybrid visual descriptors, including spatiotemporal action features and scene descriptors, are first extracted from the video sequence. We then learn a joint probability distribution between scene and action with a Naive Bayesian Nearest Neighbor algorithm, which is adopted to jointly infer action categories online in combination with off-the-shelf action recognition algorithms. We demonstrate our merits by comparing with the state of the art on several action recognition benchmarks.
{"title":"Seeing actions through scene context","authors":"Hongbo Zhang, Songzhi Su, Shaozi Li, Duansheng Chen, Bineng Zhong, R. Ji","doi":"10.1109/VCIP.2013.6706382","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706382","url":null,"abstract":"Recognizing human actions is not alone, as hinted by the scene herein. In this paper, we investigate the possibility to boost the action recognition performance by exploiting their scene context associated. To this end, we model the scene as a mid-level “hidden layer” to bridge action descriptors and action categories. This is achieved via a scene topic model, in which hybrid visual descriptors including spatiotemporal action features and scene descriptors are first extracted from the video sequence. Then, we learn a joint probability distribution between scene and action by a Naive Bayesian N-earest Neighbor algorithm, which is adopted to jointly infer the action categories online by combining off-the-shelf action recognition algorithms. We demonstrate our merits by comparing to state-of-the-arts in several action recognition benchmarks.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"283 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132569627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}