Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702535
T. Kimoto, Fumihiko Kosaka
An image watermarking scheme using a previously proposed bit embedding method is developed. To achieve a desired subjective visual quality in the watermarked image, the embedding parameter, which is related to both the image quality and the embedding capacity, is determined by using a perceptual model. First, based on the properties of the bit embedding method, a perceptual model involving two kinds of objective quality measures is assumed. Then, measurements of human subjective image quality are analyzed in terms of their correlation with these two measures. From this analysis, an estimating function that yields an estimate of the subjective quality from the two objective measurements is derived. Guided by the estimating function, the bit embedding method operates in each image region so as to achieve a desired subjective image quality while increasing the watermark embedding capacity. Simulation results demonstrate that the estimating function values correlate linearly with human subjective evaluations, and that the embedding parameters can be adaptively changed in every image region by using the function.
{"title":"A subjective image quality metric for bit-inversion-based watermarking","authors":"T. Kimoto, Fumihiko Kosaka","doi":"10.1109/PCS.2010.5702535","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702535","url":null,"abstract":"An image watermarking scheme using the previously proposed bit embedding method is developed. To achieve a desired subjective visual quality in the watermarked image, the embedding parameter that is related to both the image quality and the embedding capacity is determined by using a perceptual model. First, based on the properties of the bit embedding method, the perceptual model of two kinds of objective quality measures is assumed. Then, the measurements of human subjective image quality are analyzed from the viewpoint of the correlation with these two measures. Thereby, the estimating function that can yield an estimate of the subjective quality from two objective measurements is determined. According to the estimating function, the bit embedding method performs in each image region so as to achieve a desired subjective image quality while increasing the capacity of embedding watermark bits. The simulation results demonstrate that the estimating function values have a linear correlation with human subjective evaluations, and the embedding parameters can be adaptively changed in every image region by using the function.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114301558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702585
Dong Yon Kim, Dongsan Jun, H. W. Park
Distributed video coding (DVC) is an emerging research area for low-power video coding applications. In DVC, the encoder is much simpler than in a conventional video codec, whereas the decoder is computationally heavy. The DVC decoder exploits side information, generated by motion-compensated frame interpolation, to reconstruct the Wyner-Ziv frame. This paper proposes an efficient side information generation algorithm using seed blocks for DVC. Seed blocks are first selected and then used for motion estimation of the remaining blocks. Because the resulting side information is closer to the target image, the final reconstructed image in the DVC decoder has better quality and the compression ratio is higher. The proposed method improves DVC compression performance while reducing computing time. Experimental results show that the proposed method estimates accurate motion vectors, and that its computational complexity for motion estimation is significantly lower than that of previous methods.
{"title":"An efficient side information generation using seed blocks for distributed video coding","authors":"Dong Yon Kim, Dongsan Jun, H. W. Park","doi":"10.1109/PCS.2010.5702585","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702585","url":null,"abstract":"Recently, a new video coding technique, distributed video coding (DVC), is an emerging research area for low power video coding applications. In the DVC, the encoder is much simpler than the conventional video codec, whereas the decoder is very heavy. The DVC decoder exploits side information which is generated by motion compensated frame interpolation to reconstruct the Wyner-Ziv frame. This paper proposes an efficient side information generation algorithm using seed blocks for DVC. Seed blocks are firstly selected to be used for motion estimation of the other blocks. As the side information is close to the target image, the final reconstructed image in the DVC decoder has better quality and the compression ratio becomes high. The proposed method contributes to improve the DVC compression performance with reduced computing time. Experimental results show that accurate motion vectors are estimated by the proposed method and its computational complexity of motion estimation is significantly reduced in comparison with the previous methods.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128387021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702545
Christian Keimel, K. Diepold, M. Sarkis
The future of video coding for 3DTV lies in the combination of depth maps and corresponding textures. Most current video coding standards, however, are optimized only for visual quality and cannot efficiently compress depth maps. In this work we present a content-adaptive depth map meshing scheme with tritree partitioning and entropy coding for 3D video. We show that this approach outperforms the intra-frame prediction of AVC/H.264 for coding the depth maps of still images. We also demonstrate that combining AVC/H.264 with our algorithm increases the visual quality of the encoded texture by 6 dB on average. This work is currently limited to still images, but an extension to intra coding of 3D video is straightforward.
{"title":"Improving the visual quality of AVC/H.264 by combining it with content adaptive depth map compression","authors":"Christian Keimel, K. Diepold, M. Sarkis","doi":"10.1109/PCS.2010.5702545","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702545","url":null,"abstract":"The future of video coding for 3DTV lies in the combination of depth maps and corresponding textures. Most current video coding standards, however, are only optimized for visual quality and are not able to efficiently compress depth maps. We present in this work a content adaptive depth map meshing with tritree and entropy encoding for 3D videos. We show that this approach outperforms the intra frame prediction of AVC/H.264 for the coding of depth maps of still images. We also demonstrate by combining AVC/H.264 with our algorithm that we are able to increase the visual quality of the encoded texture on average by 6 dB. This work is currently limited to still images but an extension to intra coding of 3D video is straightforward.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134618847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702564
Eduardo Peixoto, Toni Zgaljic, E. Izquierdo
Scalable Video Coding (SVC) enables low-complexity adaptation of compressed video, providing an efficient solution for video content delivery through heterogeneous networks and to different displays. However, legacy video and most commercially available content-capturing devices use conventional non-scalable coding, e.g., H.264/AVC. This paper proposes an efficient transcoder from H.264/AVC to a wavelet-based SVC to exploit the advantages offered by SVC technology. The proposed transcoder can cope with different coding configurations in H.264/AVC, such as IPP or IBBP with multiple reference frames. To reduce the transcoder's complexity, the motion information and the presence of residual data extracted from the decoded H.264/AVC video are exploited. Experimental results show good performance of the proposed transcoder in terms of decoded video quality and system complexity.
{"title":"H.264/AVC to wavelet-based scalable video transcoding supporting multiple coding configurations","authors":"Eduardo Peixoto, Toni Zgaljic, E. Izquierdo","doi":"10.1109/PCS.2010.5702564","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702564","url":null,"abstract":"Scalable Video Coding (SVC) enables low complexity adaptation of the compressed video, providing an efficient solution for video content delivery through heterogeneous networks and to different displays. However, legacy video and most commercially available content capturing devices use conventional non-scalable coding, e.g., H.264/AVC. This paper proposes an efficient transcoder from H.264/AVC to a wavelet-based SVC to exploit the advantages offerend by the SVC technology. The proposed transcoder is able to cope with different coding configurations in H.264/AVC, such as IPP or IBBP with multiple reference frames. To reduce the transcoder's complexity, motion information and presence of the residual data extracted from the decoded H.264/AVC video are exploited. Experimental results show a good performance of the proposed transcoder in terms of decoded video quality and system complexity.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"275 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134535569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702527
B. Zatt, M. Shafique, S. Bampi, J. Henkel
In this work a novel scheme is proposed for adaptive early SKIP mode decision in multiview video coding, based on mode correlation in the 3D neighborhood, variance, and rate-distortion properties. Our scheme employs an adaptive thresholding mechanism in order to react to changing values of the Quantization Parameter (QP). Experimental results demonstrate that our scheme provides a consistent time saving over a wide range of QP values. Compared to exhaustive mode decision, our scheme provides a significant reduction in encoding complexity (up to 77%) at the cost of a small PSNR loss (0.172 dB on average). Compared to the state of the art, our scheme provides on average a 2× higher complexity reduction with a relatively higher PSNR value (avg. 0.2 dB).
{"title":"An adaptive early skip mode decision scheme for multiview video coding","authors":"B. Zatt, M. Shafique, S. Bampi, J. Henkel","doi":"10.1109/PCS.2010.5702527","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702527","url":null,"abstract":"In this work a novel scheme is proposed for adaptive early SKIP mode decision in the multiview video coding based on mode correlation in the 3D-neighborhood, variance, and ratedistortion properties. Our scheme employs an adaptive thresholding mechanism in order to react to the changing values of Quantization Parameter (QP). Experimental results demonstrate that our scheme provides a consistent time saving over a wide range of QP values. Compared to the exhaustive mode decision, our scheme provides a significant reduction in the encoding complexity (up to 77%) at the cost of a small PSNR loss (0.172 dB in average). Compared to state-of-the-art, our scheme provides an average 2× higher complexity reduction with a relatively higher PSNR value (avg. 0.2 dB).","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125967492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702533
Gang He, Dajiang Zhou, Jinjia Zhou, S. Goto
This paper proposes a high-performance intra prediction architecture that supports the H.264/AVC high profile. The proposed MB/block co-reordering avoids data dependency and improves pipeline utilization, so the timing constraint of real-time 4k×2k encoding can be met with negligible quality loss. The 16×16 and 8×8 prediction engines work in parallel for prediction and coefficient generation, and a reordering interlaced reconstruction is designed for a fully pipelined architecture. Processing one macroblock (MB) takes only 160 cycles, and the hardware utilization of the prediction and reconstruction modules is almost 100%. Furthermore, a PE-reusable 8×8 intra predictor and a hybrid SAD & SATD mode decision are proposed to save hardware cost. The design is implemented in 90 nm CMOS technology with 113.2k gates and can encode 4k×2k video sequences at 60 fps at an operating frequency of 310 MHz.
{"title":"Intra prediction architecture for H.264/AVC QFHD encoder","authors":"Gang He, Dajiang Zhou, Jinjia Zhou, S. Goto","doi":"10.1109/PCS.2010.5702533","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702533","url":null,"abstract":"This paper proposes a high-performance intra prediction architecture that can support H.264/AVC high profile. The proposed MB/block co-reordering can avoid data dependency and improve pipeline utilization. Therefore, the timing constraint of real-time 4k×2k encoding can be achieved with negligible quality loss. 16×16 prediction engine and 8×8 prediction engine work parallel for prediction and coefficients generating. A reordering interlaced reconstruction is also designed for fully pipelined architecture. It takes only 160 cycles to process one macroblock (MB). Hardware utilization of prediction and reconstruction modules is almost 100%. Furthermore, PE-reusable 8×8 intra predictor and hybrid SAD & SATD mode decision are proposed to save hardware cost. The design is implemented by 90nm CMOS technology with 113.2k gates and can encode 4k×2k video sequences at 60 fps with operation frequency of 310MHz.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124915004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702572
Songnan Li, Lin Ma, Fan Zhang, K. Ngan
Visual quality assessment plays a crucial role in many vision-related signal processing applications. In the literature, most effort has been spent on spatial visual quality measures; although a large number of video quality metrics have been proposed, the methods that use temporal information for quality assessment are less diversified. In this paper, we propose a novel method to measure temporal impairments. The proposed method can be incorporated into any image quality metric to extend it into a video quality metric. Moreover, it can easily be combined with MSE in a video coding system for rate-distortion optimization.
{"title":"Temporal inconsistency measure for video quality assessment","authors":"Songnan Li, Lin Ma, Fan Zhang, K. Ngan","doi":"10.1109/PCS.2010.5702572","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702572","url":null,"abstract":"Visual quality assessment plays a crucial role in many vision-related signal processing applications. In the literature, more efforts have been spent on spatial visual quality measure. Although a large number of video quality metrics have been proposed, the methods to use temporal information for quality assessment are less diversified. In this paper, we propose a novel method to measure the temporal impairments. The proposed method can be incorporated into any image quality metric to extend it into a video quality metric. Moreover, it is easy to apply the proposed method in video coding system to incorporate with MSE for rate-distortion optimization.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"84 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126023333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702534
N. Sprljan, P. Brasnett, S. Paschalakis
This paper presents a new application-specific lossless compression scheme developed for video identification descriptors, also known as video fingerprints or signatures. In designing such a descriptor, one usually has to balance the descriptor size against discriminating power and temporal localisation performance. The proposed compression scheme alleviates this problem by efficiently exploiting the temporal redundancies present in the video fingerprint, allowing highly accurate fingerprints which also entail low transmission and storage costs. In this paper we provide a detailed description of our compression scheme and a comparative evaluation against well known state-of-the-art generic compression tools.
{"title":"Compressed signature for video identification","authors":"N. Sprljan, P. Brasnett, S. Paschalakis","doi":"10.1109/PCS.2010.5702534","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702534","url":null,"abstract":"This paper presents a new application-specific lossless compression scheme developed for video identification descriptors, also known as video fingerprints or signatures. In designing such a descriptor, one usually has to balance the descriptor size against discriminating power and temporal localisation performance. The proposed compression scheme alleviates this problem by efficiently exploiting the temporal redundancies present in the video fingerprint, allowing highly accurate fingerprints which also entail low transmission and storage costs. In this paper we provide a detailed description of our compression scheme and a comparative evaluation against well known state-of-the-art generic compression tools.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130113019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702454
Antonio Ortega
For the purpose of this paper we group, under the generic term multiview video, different systems for which multiple standard video cameras and, possibly, additional depth-capturing cameras, are used. Video is then presented to the user using special glasses or displays. Research work in this area has focused on topics ranging from designing compression techniques to developing new 3D displays. In this paper we primarily consider the challenges involved in developing efficient compression tools. Our primary observation is that the “right” coding tools could depend heavily on choices made for content capture, display and communication. This is of course true for conventional video coding as well. But we will argue that it is even more important to address these issues for multiview video because there are much greater differences between different application scenarios (as compared to conventional video). The risk is that coding tools that are too narrowly focused on a specific application scenario may not be at all suitable for others. We focus specifically on three factors for which there exists significant uncertainty, namely, displays, depth estimation and content delivery. Our goal is not to discuss in detail current and future approaches (e.g., emerging alternative display technologies), but rather to show how these various approaches may have an impact on compression system design.
{"title":"Challenges in multiview video — The 3 D'S","authors":"Antonio Ortega","doi":"10.1109/PCS.2010.5702454","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702454","url":null,"abstract":"For the purpose of this paper we group, under the generic term multiview video, different systems for which multiple standard video cameras and, possibly, additional depth-capturing cameras, are used. Video is then presented to the user using special glasses or displays. Research work in this area has focused on topics ranging from designing compression techniques to developing new 3D displays. In this paper we primarily consider the challenges involved in developing efficient compression tools. Our primary observation is that the “right” coding tools could depend heavily on choices made for content capture, display and communication. This is of course true for conventional video coding as well. But we will argue that it is even more important to address these issues for multiview video because there are much greater differences between different application scenarios (as compared to conventional video). The risk is that coding tools that are too narrowly focused on a specific application scenario may not be at all suitable for others. We focus specifically on three factors for which there exists significant uncertainty, namely, displays, depth estimation and content delivery. 
Our goal is not to discuss in detail current and future approaches (e.g., emerging alternative display technologies), but rather to show how these various approaches may have an impact on compression system design.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126306111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 · DOI: 10.1109/PCS.2010.5702507
Weilan Luo, T. Yamasaki, K. Aizawa
In this paper, a stochastic approach is presented for extracting articulated 3D human postures from synchronized multiple cameras in high-dimensional configuration spaces. Annealed Particle Filtering (APF) [1] seeks the globally optimal solution of the likelihood. We improve and extend APF with local memorization to estimate suitable kinematic postures for a volume sequence directly, instead of projecting a rough, simplified body model onto 2D images. Our method guides the particles toward the global optimum on the basis of local constraints. A segmentation algorithm is performed on the volumetric models and the process is repeated. The articulated models are assigned 42 degrees of freedom. The matching error is about 6% on average while tracking the posture between two neighboring frames.
{"title":"3D pose estimation in high dimensional search spaces with local memorization","authors":"Weilan Luo, T. Yamasaki, K. Aizawa","doi":"10.1109/PCS.2010.5702507","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702507","url":null,"abstract":"In this paper, a stochastic approach for extracting the articulated 3D human postures by synchronized multiple cameras in the high-dimensional configuration spaces is presented. Annealed Particle Filtering (APF) [1] seeks for the globally optimal solution of the likelihood. We improve and extend the APF with local memorization to estimate the suited kinematic postures for a volume sequence directly instead of projecting a rough simplified body model to 2D images. Our method guides the particles to the global optimization on the basis of local constraints. A segmentation algorithm is performed on the volumetric models and the process is repeated. We assign the articulated models 42 degrees of freedom. The matching error is about 6% on average while tracking the posture between two neighboring frames.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126439158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}