Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051496
Mai Xu, Jingze Zhang, Yuan Ma, Zulin Wang
Recently, numerous perceptual video coding approaches have been proposed that use faces as regions of interest (ROI) to improve the perceived visual quality of compressed conversational videos. However, no objective metric exists that is specialized for efficiently evaluating the perceived visual quality of compressed conversational videos. This paper therefore proposes an efficient objective quality assessment method for conversational videos, namely Gaussian mixture model based PSNR (GMM-PSNR). First, eye-tracking experiments, together with a face extraction technique, were carried out to identify the importance of the background, face, and facial-feature regions through eye fixation points. Next, assuming that the distribution of the eye fixation points obeys a Gaussian mixture model, an importance weight map is generated by introducing a new term, eye fixation points per pixel (efp/p). Finally, GMM-PSNR is computed by assigning different penalties to the distortion of each pixel in a video frame according to the generated weight map. The experimental results show the effectiveness of GMM-PSNR by investigating its correlation with subjective quality on several test video sequences.
Title: A novel objective quality assessment method for perceptual video coding in conversational scenarios
Venue: 2014 IEEE Visual Communications and Image Processing Conference
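The per-pixel weighting idea described in this abstract can be sketched in a few lines. This is a minimal illustration, assuming a 2-D Gaussian mixture already fitted to fixation points; the component parameters, the normalisation, and the function names are illustrative, not the paper's efp/p formulation:

```python
import numpy as np

def gmm_weight_map(h, w, components):
    """Build an importance map from a 2-D Gaussian mixture.
    `components` is a list of (weight, (mu_y, mu_x), sigma) tuples,
    e.g. fitted to eye-fixation points (illustrative layout)."""
    ys, xs = np.mgrid[0:h, 0:w]
    m = np.zeros((h, w))
    for pi, (my, mx), s in components:
        m += pi * np.exp(-((ys - my) ** 2 + (xs - mx) ** 2) / (2 * s ** 2))
    return m / m.sum()  # normalise so the weights sum to 1

def gmm_psnr(ref, dist, weights):
    """PSNR variant with per-pixel penalties taken from the weight map."""
    wmse = np.sum(weights * (ref.astype(float) - dist.astype(float)) ** 2)
    wmse /= weights.sum()
    return 10 * np.log10(255.0 ** 2 / wmse)
```

With such a map, the same absolute distortion is penalised more heavily near fixation clusters (faces) than in the background, which is the behaviour the metric targets.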
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051506
Jing Chen, Sunil Lee, E. Alshina, Yinji Piao
The AVS2 video standard is the next-generation video coding standard under development by the Audio Video coding Standard (AVS) workgroup of China. In this paper, the design of Sample Adaptive Offset (SAO) in AVS2 is presented. Considering implementation issues, a shifted structure is adopted in which the SAO parameter region is shifted from the Largest Coding Unit (LCU) toward the upper-left, making the SAO parameter region consistent with the processing region in implementations. Moreover, category-dependent offsets are introduced for the edge type, based on statistical results, to improve offset coding, and non-consecutive offset bands are adopted for the band type to optimize the offset bands. Test results show that SAO achieves on average 0.3% to 1.4% luma coding gain under AVS2 common test conditions.
Title: Sample adaptive offset in AVS2 video standard
Venue: 2014 IEEE Visual Communications and Image Processing Conference
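The category-dependent edge offsets mentioned above sit on top of an edge-offset classification of each sample against its two neighbours along a direction. A sketch for one horizontal line of samples, using the HEVC-style five categories as a stand-in (the offset values and the exact AVS2 category semantics are illustrative):

```python
import numpy as np

def sao_edge_offset(row, offsets):
    """Classify each interior sample of a 1-D line (horizontal edge-offset
    class) against its two neighbours and add a category-dependent offset.
    `offsets` maps category -> offset; missing categories get 0."""
    out = row.astype(int).copy()
    for i in range(1, len(row) - 1):
        a, c, b = int(row[i - 1]), int(row[i]), int(row[i + 1])
        if c < a and c < b:
            cat = 1  # local valley
        elif (c < a and c == b) or (c == a and c < b):
            cat = 2  # concave corner
        elif (c > a and c == b) or (c == a and c > b):
            cat = 3  # convex corner
        elif c > a and c > b:
            cat = 4  # local peak
        else:
            cat = 0  # monotonic / flat: no offset
        out[i] = np.clip(c + offsets.get(cat, 0), 0, 255)
    return out
```

Smoothing a valley up and a peak down is exactly the ringing-reduction behaviour the encoder pays for with the signalled offsets.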
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051605
Xiaofeng Huang, Huizhu Jia, Kaijin Wei, Jie Liu, Chuang Zhu, Zhengguang Lv, Don Xie
The emerging High Efficiency Video Coding (HEVC) standard achieves significantly better coding efficiency than all existing video coding standards. HEVC adopts a quad-tree-structured coding unit (CU) to improve compression efficiency, but this causes very high computational complexity because the encoder exhausts all combinations of prediction units (PU) and transform units (TU) for every CU it tries. To alleviate the computational burden of HEVC intra coding, a fast CU depth decision algorithm is proposed in this paper. The CU texture complexity and the correlation between the current CU and neighbouring CUs are adaptively taken into account when deciding the CU split and the CU depth search range. Experimental results show that the proposed scheme provides 39.3% encoder time savings on average compared to the default encoding scheme in HM-RExt-13.0, with only a 0.6% BD-BR penalty in coding performance.
Title: Fast algorithm of coding unit depth decision for HEVC intra coding
Venue: 2014 IEEE Visual Communications and Image Processing Conference
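The depth-range restriction described above can be sketched as follows: texture variance bounds the maximum depth worth trying, and neighbouring CU depths narrow the window further. The thresholds and the exact combination rule are illustrative assumptions, not the paper's tuned decision logic:

```python
import numpy as np

def cu_depth_range(cu_block, neighbour_depths, var_thresholds=(20.0, 100.0)):
    """Pick a restricted CU depth search range [d_min, d_max] from the
    block's texture variance and the depths chosen by neighbouring CUs."""
    var = float(np.var(cu_block))
    lo, hi = var_thresholds
    if var < lo:
        tex_max = 1   # smooth content: stop splitting early
    elif var < hi:
        tex_max = 2
    else:
        tex_max = 3   # complex texture: allow the full depth
    if neighbour_depths:  # exploit spatial correlation with neighbours
        d_min = max(0, min(neighbour_depths) - 1)
        d_max = min(tex_max, max(neighbour_depths) + 1)
    else:
        d_min, d_max = 0, tex_max
    return d_min, max(d_min, d_max)
```

Skipping depths outside the returned window is where the encoder time saving comes from; the BD-BR penalty is the price of occasionally missing the true best depth.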
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051617
M. Mohanty, C. Gehrmann, P. Atrey
Secret image sharing is a popular image hiding scheme that typically uses (3, 3, n) multi-secret sharing to hide the colors of a secret image. The use of (3, 3, n) multi-secret sharing, however, can lead to information loss. In this paper, we study this loss of information from an image perspective and show that one-third of the color values of the secret image can be leaked when the sum of any two selected share numbers equals the prime number used in the secret sharing. Furthermore, we show that if the selected share numbers do not satisfy this condition (for example, when each selected share number is less than half of the prime), then the colors of the secret image are not leaked. In that case, only a noise-like image can be reconstructed from fewer than three shares.
Title: Avoiding weak parameters in secret image sharing
Venue: 2014 IEEE Visual Communications and Image Processing Conference
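The weak-parameter leak can be reproduced concretely, assuming a Thien-Lin-style (3, 3, n) scheme in which three pixel values (a0, a1, a2) become the coefficients of f(x) = a0 + a1*x + a2*x^2 mod p and share number x receives f(x). If two share numbers satisfy x1 + x2 = p (i.e. x2 = -x1 mod p), the even-power terms cancel in f(x1) - f(x2), and the middle coefficient a1 (one third of the hidden pixels) leaks from just two shares:

```python
P = 251  # prime commonly used for 8-bit pixel sharing

def share(coeffs, x):
    """Evaluate the degree-2 sharing polynomial at share number x."""
    a0, a1, a2 = coeffs
    return (a0 + a1 * x + a2 * x * x) % P

def leak_a1(x1, s1, x2, s2):
    """Recover a1 from two shares whose numbers sum to the prime:
    f(x1) - f(x2) = a1*(x1 - x2) + a2*(x1^2 - x2^2) = 2*a1*x1 (mod P),
    because x2 = -x1 makes the quadratic term vanish."""
    assert (x1 + x2) % P == 0, "leak only occurs for this weak choice"
    return (s1 - s2) * pow(2 * x1, -1, P) % P
```

Choosing all share numbers below p/2, as the abstract suggests, makes x1 + x2 = p impossible and closes this leak.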
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051554
Alexey Filippov, Vasily Rufitskiy, V. Potapov
In this paper, we present a novel data-hiding method that does not interfere with other data-hiding techniques (e.g., sign bit hiding) that are already included in state-of-the-art coding standards such as HEVC/H.265. A key feature of the proposed technique is its orientation toward hierarchically structured units (e.g., the hierarchy of coding, prediction, and transform units in HEVC/H.265). As shown in the paper, the method provides higher coding gain when applied to scalar-quantized values. Finally, we present experimental results that confirm the high rate-distortion performance of this technique in comparison with explicit signaling, and we discuss its suitability for HEVC-compatible watermarking.
Title: Scalar-quantization-based multi-layer data hiding for video coding applications
Venue: 2014 IEEE Visual Communications and Image Processing Conference
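To make the general idea of hiding data in scalar-quantized values concrete, here is a simplified parity-based stand-in in the spirit of sign-bit hiding. This is not the authors' multi-layer algorithm; it merely shows how a bit can ride on a group of quantized coefficients instead of being signalled explicitly:

```python
def embed_parity_bit(coeffs, bit):
    """Force sum(coeffs) % 2 == bit by nudging one coefficient.
    A hypothetical cost rule is used: adjust the last nonzero
    coefficient toward zero, which limits the added distortion."""
    out = list(coeffs)
    if sum(abs(c) for c in out) == 0:
        return out  # all-zero group: nothing can carry the bit
    if sum(out) % 2 != bit:
        i = max(k for k, c in enumerate(out) if c != 0)
        out[i] += -1 if out[i] > 0 else 1
    return out

def extract_parity_bit(coeffs):
    """The decoder recomputes the parity; no explicit flag is sent."""
    return sum(coeffs) % 2
```

Because the bit is inferred rather than signalled, it costs zero syntax bits, which is the source of the coding gain such schemes claim over explicit signaling.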
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051511
Jun Liu, Xiaojun Jing, Songlin Sun, Zifeng Lian
Gabor filters are among the most successful methods for face recognition; however, they dramatically increase the data volume of the face representation. To extract compact and distinctive information, we propose the Variable Length Dominant Gabor Local Binary Pattern (VLD-GLBP) for face recognition. It significantly reduces the face representation data volume while its performance remains comparable to that of complex state-of-the-art techniques. Specifically, local binary pattern (LBP) features are first computed from the Gabor images. Then, the most frequently occurring patterns are extracted to form the VLD-GLBP. Finally, the distance between VLD-GLBPs is computed to classify the face images. Experimental results on the FERET database verify the efficiency of the proposed VLD-GLBP method.
Title: Variable length dominant Gabor local binary pattern (VLD-GLBP) for face recognition
Venue: 2014 IEEE Visual Communications and Image Processing Conference
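The two building blocks above, LBP codes and a variable-length dominant-pattern set, can be sketched directly. The 80% coverage threshold and the plain 8-neighbour LBP are illustrative assumptions (the paper computes LBP on Gabor-filtered images, omitted here):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP codes for the interior of a 2-D array:
    each neighbour >= centre contributes one bit of the 8-bit code."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

def dominant_patterns(codes, coverage=0.8):
    """Keep the most frequent LBP codes until they cover `coverage` of
    all pixels -- a variable-length dominant-pattern set in the spirit
    of VLD-GLBP (the coverage threshold is an illustrative choice)."""
    vals, counts = np.unique(codes, return_counts=True)
    order = np.argsort(-counts)
    total, acc, kept = counts.sum(), 0, []
    for i in order:
        kept.append(int(vals[i]))
        acc += counts[i]
        if acc / total >= coverage:
            break
    return kept
```

Keeping only the dominant codes is what shrinks the representation: the histogram length adapts to the image instead of always spanning all 256 patterns.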
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051592
L. Boulard, E. Baccaglini, R. Scopigno
In this paper we propose an innovative video-based architecture aimed at monitoring elderly people. It is based on inexpensive devices and open-source libraries, and preliminary tests demonstrate that it achieves significant performance. The overall architecture of the system and its implementation are briefly discussed in terms of their functional blocks, and the effect of feedback loops on the effectiveness of the algorithm is analyzed.
Title: Insights into the role of feedbacks in the tracking loop of a modular fall-detection algorithm
Venue: 2014 IEEE Visual Communications and Image Processing Conference
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051517
Xin Zhao, Ying Chen, Li Zhang
In the 3D video extension of H.264/AVC, namely 3D-AVC, Neighboring Block based Disparity Vector (NBDV) derivation has been proposed to support multiview/stereo compatibility, so that texture views can be decoded independently of depth views. NBDV generates a disparity vector for the current macroblock (MB) from the motion information of neighbouring blocks, especially those coded with motion vectors pointing to inter-view reference pictures. In 3D-AVC, NBDV is designed to access a minimum number of spatial and temporal neighbouring blocks, so there is a high probability that it fails to derive an effective disparity vector. This paper introduces a derived disparity vector scheme in which a single disparity vector derived from NBDV is maintained for the whole slice and is used as the disparity vector of the current MB whenever NBDV cannot derive one from the neighbouring blocks. Simulation results show that the proposed method provides a 3.6% bit rate reduction for multiview coding.
Title: Derived disparity vector based NBDV for 3D-AVC
Venue: 2014 IEEE Visual Communications and Image Processing Conference
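The slice-level fallback described above amounts to a tiny piece of state machinery. A sketch with an illustrative data layout (real NBDV scans specific spatial and temporal neighbour positions in a fixed order, which is abstracted into a list here):

```python
def nbdv_with_fallback(neighbour_mvs, state):
    """Return a disparity vector for the current MB.
    `neighbour_mvs` is a list of (mv, is_interview) pairs in scan order;
    `state` holds the single derived DV maintained for the slice and is
    refreshed on every successful NBDV derivation."""
    for mv, is_interview in neighbour_mvs:
        if is_interview:
            state['derived_dv'] = mv   # refresh the slice-level derived DV
            return mv                  # NBDV succeeded
    return state['derived_dv']         # NBDV failed: reuse the derived DV
```

The point of the scheme is visible in the fallback path: even when no neighbour carries an inter-view motion vector, the MB still gets a plausible disparity vector at zero signalling cost.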
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051552
Sondos M. Fadl, N. Semary
Image forgery detection is currently one of the most active research fields in image processing, and Copy-Move (CM) forgery is one of the most frequently used manipulation techniques. In this paper, we propose an efficient and fast method for detecting copy-move regions that accelerates the block matching strategy. First, the image is divided into fixed-size overlapping blocks, and the discrete cosine transform is applied to each block to represent its features. A fast k-means clustering technique is used to cluster the blocks into different classes, and zigzag scanning is performed to reduce the length of each block's feature vector. The feature vectors in each cluster are then lexicographically sorted by radix sort, and the correlation between nearby blocks in the sorted order indicates their similarity. Experimental results demonstrate that the proposed method detects duplicated regions efficiently and reduces processing time by up to 50% compared with previous works.
Title: A proposed accelerated image copy-move forgery detection
Venue: 2014 IEEE Visual Communications and Image Processing Conference
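The core block-matching pipeline, DCT features plus lexicographic sorting, can be sketched as follows. The k-means clustering and radix-sort acceleration that distinguish the proposed method are omitted for brevity, and the 2x2 low-frequency feature is an illustrative stand-in for the zigzag-truncated vector:

```python
import numpy as np

def dct2(block):
    """2-D orthonormal DCT-II built from the DCT matrix (no SciPy)."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def find_duplicates(img, b=8, thresh=1e-6):
    """Detect copy-moved blocks: slide a b x b window, keep a few
    low-frequency DCT coefficients as the feature, sort the feature
    vectors lexicographically, and compare neighbours in sorted order."""
    h, w = img.shape
    feats, pos = [], []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            d = dct2(img[y:y + b, x:x + b].astype(float))
            feats.append(d[:2, :2].ravel())  # truncated feature vector
            pos.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])        # lexicographic block order
    pairs = []
    for i, j in zip(order[:-1], order[1:]):  # duplicates end up adjacent
        if np.allclose(feats[i], feats[j], atol=thresh):
            pairs.append(tuple(sorted((pos[i], pos[j]))))
    return pairs
```

Sorting turns the quadratic all-pairs comparison into a linear scan over sorted neighbours, which is exactly the cost the clustering and radix-sort steps of the paper attack further.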
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051581
Tingting Kou, Lei Yang, Y. Wan
Image noise level estimation is an important step in many image processing tasks such as denoising, compression, and segmentation. Although recently proposed SVD- and PCA-based approaches have produced the most accurate estimates so far, these linear-subspace methods still suffer from contamination by the clean signal content, especially at low noise levels. In addition, the common performance evaluation procedure currently in use treats test images as noise-free; this ignores the noise already present in those test images and invariably incurs a bias. In this paper we make two contributions. First, we propose a new noise level estimation method using nonlinear local surface approximation: the noise-free content of each image block is approximated by a high-degree polynomial, and the block residual variances, which follow a chi-squared distribution, are sorted so that a quantile of carefully chosen size can be used for estimation. Second, we propose a new performance evaluation procedure that is free from the influence of the noise already present in the test images. Experimental results show much better performance than typical state-of-the-art methods in terms of both estimation accuracy and stability.
Title: Accurate image noise level estimation by high order polynomial local surface approximation and statistical inference
Venue: 2014 IEEE Visual Communications and Image Processing Conference
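The estimation pipeline described above, polynomial surface fit per block, residual variances, quantile selection, can be sketched directly. The block size, polynomial degree, and the choice to average the smallest-variance fraction (where clean-signal contamination is least) are illustrative assumptions, not the paper's tuned quantile rule:

```python
import numpy as np

def estimate_noise_std(img, b=8, degree=3, quantile=0.1):
    """Estimate the noise standard deviation: fit a degree-`degree`
    polynomial surface to each b x b block, collect the residual
    variances (chi-squared distributed for pure noise), sort them,
    and average a quantile of the least-textured blocks."""
    h, w = img.shape
    ys, xs = np.mgrid[0:b, 0:b].reshape(2, -1) / (b - 1.0)
    # design matrix: all monomials x^i * y^j with i + j <= degree
    cols = [xs ** i * ys ** j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    dof = b * b - A.shape[1]           # residual degrees of freedom
    variances = []
    for y in range(0, h - b + 1, b):
        for x in range(0, w - b + 1, b):
            z = img[y:y + b, x:x + b].astype(float).ravel()
            resid = z - A @ np.linalg.lstsq(A, z, rcond=None)[0]
            variances.append(resid @ resid / dof)
    variances = np.sort(variances)
    k = max(1, int(quantile * len(variances)))
    return float(np.sqrt(np.mean(variances[:k])))
```

Dividing by the residual degrees of freedom rather than the pixel count is what keeps each block's variance estimate unbiased after fitting away the 10-parameter surface.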