A novel upsampling scheme for depth map compression in 3DTV system
Yanjie Li, Lifeng Sun
28th Picture Coding Symposium | Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702456
In 3D video transmission, the depth map is normally compressed by resolution reduction to save bandwidth. The information lost in resolution reduction is recovered by an appropriate upsampling algorithm in the decoding step. Most previous work treats depth upsampling as a generic 2D image upsampling problem and does not take the intrinsic properties of depth maps into consideration. In this paper, we propose a novel two-step depth map upsampling scheme to address this problem for 3D video. The first step uses the full-resolution 2D color map to guide the reconstruction of a more accurate full-resolution depth map. The second step then flattens the reconstructed depth map to ensure its local uniformity. Test results show that the proposed upsampling scheme achieves up to 2 dB coding gain for the rendering of free-viewpoint video and significantly improves its perceptual quality.
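The first step, color-guided depth reconstruction, can be illustrated with a joint-bilateral-style upsampler: each full-resolution depth sample is a weighted average of nearby low-resolution depth samples, where the weights combine spatial distance with color similarity in the full-resolution guide image. This is a minimal sketch of the general technique, not the paper's exact algorithm; all parameter names and defaults are illustrative.

```python
import math

def guided_depth_upsample(depth_lr, color_hr, scale, sigma_s=1.0, sigma_c=10.0):
    """Upsample a low-res depth map using the full-res color image as a guide
    (a joint-bilateral-style sketch; the paper's exact scheme may differ)."""
    h_lr, w_lr = len(depth_lr), len(depth_lr[0])
    h_hr, w_hr = h_lr * scale, w_lr * scale
    depth_hr = [[0.0] * w_hr for _ in range(h_hr)]
    for y in range(h_hr):
        for x in range(w_hr):
            cy, cx = y / scale, x / scale  # position in the low-res grid
            num = den = 0.0
            for j in range(max(0, int(cy) - 1), min(h_lr, int(cy) + 2)):
                for i in range(max(0, int(cx) - 1), min(w_lr, int(cx) + 2)):
                    # spatial weight: distance in the low-res grid
                    ws = math.exp(-((j - cy) ** 2 + (i - cx) ** 2) / (2 * sigma_s ** 2))
                    # range weight: color similarity in the full-res guide image
                    dc = color_hr[y][x] - color_hr[min(j * scale, h_hr - 1)][min(i * scale, w_hr - 1)]
                    wc = math.exp(-(dc ** 2) / (2 * sigma_c ** 2))
                    num += ws * wc * depth_lr[j][i]
                    den += ws * wc
            depth_hr[y][x] = num / den
    return depth_hr
```

Because the range weight suppresses depth samples from across a color edge, the upsampled depth keeps sharp boundaries where the color image has them, which is exactly where plain bilinear upsampling blurs.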
Recent advances in video coding using static background models
A. Krutz, A. Glantz, T. Sikora
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702536
Sprite coding, as standardized in MPEG-4 Visual, can result in superior performance compared to common hybrid video codecs, both objectively and subjectively. However, the state-of-the-art video coding standard H.264/AVC clearly outperforms MPEG-4 Visual sprite coding over a broad range of bit rates. Building on the sprite coding idea, this paper proposes a video coding technique that merges the advantages of H.264/AVC and sprite coding. To that end, sophisticated algorithms for global motion estimation, sprite generation and object segmentation, all needed for thorough sprite coding, are incorporated into an H.264/AVC coding environment. The proposed approach outperforms H.264/AVC especially at lower bit rates; savings of up to 21% can be achieved.
Low complexity video coding and the emerging HEVC standard
K. Ugur, K. Andersson, A. Fuldseth, G. Bjøntegaard, L. P. Endresen, J. Lainema, A. Hallapuro, J. Ridge, D. Rusanovskyy, Cixun Zhang, A. Norkin, C. Priddle, T. Rusert, Jonatan Samuelsson, Rickard Sjöberg, Zhuangfei Wu
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702540
This paper describes a low complexity video codec with high coding efficiency. It was proposed to the High Efficiency Video Coding (HEVC) standardization effort of MPEG and VCEG, and has been partially adopted into the initial HEVC Test Model under Consideration design. The proposal uses a quad-tree structure with support for large macroblocks of size 64×64 and 32×32, in addition to macroblocks of size 16×16. Entropy coding uses a low complexity variable-length-coding-based scheme with improved context adaptation over the H.264/AVC design. In addition, the proposal includes improved interpolation and deblocking filters, giving better coding efficiency at low complexity. Finally, an improved intra coding method is presented. The subjective quality of the proposal was evaluated extensively; the results show that it achieves visual quality similar to the H.264/AVC High Profile anchors with around 50% and 35% bit rate reduction for the low-delay and random-access experiments, respectively, on high-definition sequences. This is achieved with less complexity than H.264/AVC Baseline Profile, making the proposal especially suitable for resource-constrained environments.
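The quad-tree structure can be sketched as a recursive split decision: a 64×64 block is divided into quadrants down to 16×16 whenever its content is too complex to code as one unit. A real encoder makes this decision by rate-distortion cost; the sketch below substitutes a simple pixel-variance threshold (an illustrative stand-in, not the proposal's actual criterion).

```python
def quadtree_partition(block, x, y, size, min_size=16, thresh=100.0):
    """Recursively split a square region into quadrants while its pixel
    variance exceeds a threshold, stopping at min_size. A toy stand-in for
    the rate-distortion decision an actual encoder would make."""
    pixels = [block[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    if size <= min_size or var <= thresh:
        return [(x, y, size)]  # leaf: this region is coded as one unit
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_partition(block, x + dx, y + dy, half, min_size, thresh)
    return leaves
```

Flat regions stay as single 64×64 leaves (cheap to signal), while detailed regions split down to 16×16, which is the adaptation the large-macroblock quad-tree provides over a fixed 16×16 grid.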
Automatic moving object extraction using x-means clustering
K. Imamura, Naoki Kubo, H. Hashimoto
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702477
This paper proposes an automatic moving object extraction technique based on x-means clustering. X-means clustering extends k-means clustering and can determine the optimal number of clusters based on the Bayesian Information Criterion (BIC). In the proposed method, feature points are extracted from the current frame, and x-means clustering classifies them based on their estimated affine motion parameters. Each segmented region, obtained by morphological watershed, is labeled by voting over the feature point clusters within it; the labeling result constitutes the moving object extraction. Experimental results show that the proposed method produces extraction results with a suitable number of objects.
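The key ingredient over plain k-means is the BIC-based choice of cluster count. A minimal sketch on 1-D data: run k-means for each candidate k, then score each clustering with a simplified BIC (fit term plus a model-size penalty) and keep the minimizer. True x-means grows k by locally 2-splitting clusters rather than rescanning all k, and the exact BIC form here (two parameters per cluster) is an illustrative simplification.

```python
import math

def kmeans_1d(data, k, iters=50):
    """Plain 1-D k-means (Lloyd's algorithm) with deterministic init."""
    centers = sorted(data)[:: max(1, len(data) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in data:
            groups[min(range(len(centers)), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    rss = sum(min((x - c) ** 2 for c in centers) for x in data)
    return centers, rss

def best_k_by_bic(data, k_max=4):
    """Pick the cluster count minimising a simplified BIC:
    n*ln(RSS/n) + 2k*ln(n), counting two parameters per cluster."""
    n = len(data)
    scores = {}
    for k in range(1, k_max + 1):
        _, rss = kmeans_1d(data, k)
        scores[k] = n * math.log(max(rss, 1e-12) / n) + 2 * k * math.log(n)
    return min(scores, key=scores.get)
```

In the paper's setting the data points are affine motion parameter vectors rather than scalars, so each moving object's feature points form one motion cluster and BIC stops the splitting at the object count.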
The dependence of visual noise perception on background color and luminance
M. Shohara, K. Kotani
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702573
This paper quantitatively describes how noise perception depends on the background color and luminance. We conduct subjective and quantitative experiments for three noise models, using a modified grayscale method. The subjective results show that perceived color noise depends on the background color, whereas perceived luminance noise does not. The background colors most sensitive to color noise are yellow and purple. Perceived noise as a function of background gray level shows a similar trend across the noise models; noise is perceived most strongly at a background gray level of about L* ≈ 25. In addition, the perceived chromatic noise level is about 8 times smaller than the color noise calculated using CIELAB Euclidean distance.
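The reference metric in that last comparison is the CIE76 color difference, i.e. plain Euclidean distance between two points in CIELAB space. A minimal implementation:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two (L*, a*, b*)
    points, the computed noise measure the paper compares perception against."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```

Per the paper's finding, a chromatic perturbation with a computed distance of, say, ΔE = 8 in CIELAB is perceived at roughly the strength this metric would predict for ΔE = 1, i.e. the metric overstates chromatic noise visibility by about 8×.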
Separable Wiener filter based adaptive in-loop filter for video coding
Mischa Siekmann, S. Bosse, H. Schwarz, T. Wiegand
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702581
Recent investigations have shown that a non-separable Wiener filter applied inside the motion-compensation loop can improve the coding efficiency of hybrid video coding designs. In this paper, we study the application of separable Wiener filters. Our design can adaptively choose between applying the vertical filter, the horizontal filter, or both in combination. Simulation results verify that a separable in-loop Wiener filter provides virtually the same increase in coding efficiency as a non-separable Wiener filter, but at significantly reduced decoder complexity.
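The complexity argument is structural: a separable filter replaces one 2-D pass needing len(h)·len(v) multiplies per pixel with two 1-D passes needing len(h)+len(v). A sketch of the separable filtering itself (generic 1-D convolution with edge clamping; the Wiener-optimal tap values would be estimated per frame and signaled, which is omitted here):

```python
def convolve_rows(img, taps):
    """Horizontal 1-D filtering with edge clamping."""
    h, w, r = len(img), len(img[0]), len(taps) // 2
    return [[sum(taps[k] * row[min(max(x + k - r, 0), w - 1)] for k in range(len(taps)))
             for x in range(w)] for row in img]

def convolve_cols(img, taps):
    """Vertical 1-D filtering via transpose, row filter, transpose back."""
    t = [list(col) for col in zip(*img)]
    return [list(col) for col in zip(*convolve_rows(t, taps))]

def separable_filter(img, h_taps, v_taps):
    """Separable in-loop filtering sketch: one horizontal and one vertical
    1-D pass replace a single non-separable 2-D pass, cutting per-pixel
    multiplies from len(h_taps)*len(v_taps) to len(h_taps)+len(v_taps)."""
    return convolve_cols(convolve_rows(img, h_taps), v_taps)
```

The adaptive choice in the paper's design then amounts to selecting, per region, `convolve_rows` only, `convolve_cols` only, or the combined `separable_filter`.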
Power-aware complexity-scalable multiview video coding for mobile devices
M. Shafique, B. Zatt, S. Bampi, J. Henkel
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702506
We propose a novel power-aware scheme for complexity-scalable multiview video coding on mobile devices. The scheme exploits asymmetric view quality, based on the binocular suppression theory. It employs different quality-complexity classes (QCCs) and adapts at run time to the current battery state, enabling a run-time tradeoff between complexity and video quality. Experimental results show that our scheme is superior to the state of the art, providing up to 87% complexity reduction while keeping the PSNR close to that of the exhaustive mode decision. We demonstrate the power-aware adaptation between QCCs on a laptop under battery charging and discharging scenarios.
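The run-time adaptation can be pictured as a lookup from battery state to a quality-complexity class. Everything below is hypothetical: the class names, relative-complexity figures, and thresholds are invented for illustration, since the paper's actual QCC definitions are not reproduced here.

```python
# Hypothetical quality-complexity classes: each trades mode-search effort
# (relative complexity) against expected quality. Values are illustrative.
QCCS = [
    {"name": "QCC0-exhaustive", "complexity": 1.00},
    {"name": "QCC1-reduced",    "complexity": 0.50},
    {"name": "QCC2-fast",       "complexity": 0.25},
    {"name": "QCC3-minimal",    "complexity": 0.13},
]

def select_qcc(battery_fraction, charging):
    """Run-time QCC selection sketch: spend complexity freely on the
    charger, back off as the battery drains (thresholds are illustrative)."""
    if charging or battery_fraction > 0.75:
        return QCCS[0]
    if battery_fraction > 0.50:
        return QCCS[1]
    if battery_fraction > 0.25:
        return QCCS[2]
    return QCCS[3]
```

Binocular suppression justifies applying the cheaper classes to one view only: as long as the other view stays at high quality, the perceived stereo quality degrades far less than the complexity saving would suggest.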
Coding efficient improvement by adaptive search center definition
Kyohei Oba, Takahiro Bandou, Tian Song, T. Shimamoto
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702571
In this paper, an efficient search center definition algorithm is proposed for H.264/AVC. H.264/AVC achieves high coding efficiency by introducing new coding tools, including a new definition of the search center. However, that definition is not efficient in the presence of significant motion. This work proposes new search center candidates that exploit the spatial and temporal correlations of motion vectors to improve coding efficiency. Simulation results show that the proposed search centers achieve substantial bit savings but introduce high computational complexity; an additional complexity reduction algorithm is therefore introduced to improve the trade-off between bit savings and implementation performance. This work achieves a maximum bit saving of 19%.
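The idea of deriving search centers from correlated motion vectors can be sketched as a small candidate list: the spatial median predictor H.264/AVC already uses, the raw spatial neighbors, the temporally collocated vector, and the zero vector. The exact candidate set here is illustrative, not the paper's; deduplication hints at why extra candidates cost search complexity, which the paper's reduction step then addresses.

```python
def median_mv(a, b, c):
    """Component-wise median of three motion vectors, as H.264/AVC uses
    for its motion-vector predictor."""
    return (sorted((a[0], b[0], c[0]))[1], sorted((a[1], b[1], c[1]))[1])

def search_center_candidates(left, top, top_right, collocated):
    """Candidate search centers built from spatially and temporally
    correlated vectors (an illustrative set; the paper's may differ).
    Duplicates are removed, since each extra candidate adds search cost."""
    cands = [median_mv(left, top, top_right), left, top, collocated, (0, 0)]
    seen, out = set(), []
    for mv in cands:
        if mv not in seen:
            seen.add(mv)
            out.append(mv)
    return out
```

For large, consistent motion the collocated or neighbor vectors place the search center near the true displacement, so a small search range still finds a good match, which is where the bit savings come from.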
Adaptive direct vector derivation for video coding
Yusuke Itani, Shun-ichi Sekiguchi, Y. Yamada
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702458
This paper proposes a new method for improving the direct prediction scheme employed in conventional video coding standards such as AVC/H.264. We extend the direct prediction concept to better adapt to the local statistics of the video source, assuming the use of motion blocks larger than the conventional macroblock size. First, our method introduces block-adaptive spatio-temporal estimation of the direct motion vector, to compensate for the loss of motion vector estimation accuracy on large motion blocks; this estimation is performed without explicit signaling, by employing a decoder-side collaborative decision. Then, adaptive selection between two reference pictures improves direct prediction efficiency where the reliability of the estimated direct motion vector is poor. Experimental results show the proposed method provides up to 3.3% bitrate saving, and 1.5% on average, in low-bitrate coding.
Improved FMO based H.264 frame layer rate control for low bit rate video transmission
R. Cajote, S. Aramvith
Pub Date: 2010-12-01 | DOI: 10.1109/PCS.2010.5702529
The use of Flexible Macroblock Ordering (FMO) in H.264/AVC as an error-resilience tool incurs extra overhead bits that reduce coding efficiency at low bit rates. To improve coding efficiency, we present an improved frame-layer H.264/AVC rate control that takes into account the effects of using FMO for video transmission. We propose a new header bits model, an enhanced frame complexity measure, and a quantization parameter (QP) adjustment scheme. Simulation results show that the proposed method performs better than the existing frame-layer rate control with FMO enabled, across different numbers of slice groups.