Reducing bitrates of compressed video with enhanced view synthesis for FTV
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702575
Lu Yang, M. O. Wildeboer, T. Yendo, M. P. Tehrani, T. Fujii, M. Tanimoto
View synthesis using depth maps is a well-known technique for exploiting the redundancy between multi-view videos. In this paper, we deal with the bitrates required for view synthesis at the decoder side of free-viewpoint television (FTV), which uses compressed depth maps and views. Both inherent depth estimation error and coding distortion degrade synthesis quality. Our focus is to reduce the bitrates required for generating a high-quality virtual view. We employ a reliable view synthesis method and compare it with the standard MPEG view synthesis software. The experimental results show that the bitrates required for synthesizing a high-quality virtual view can be reduced by our enhanced view synthesis technique, which improves the PSNR at medium bitrates.
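As an illustration of the depth-based warping step that such synthesis builds on (not the authors' specific method), here is a minimal Python sketch of a 3D warp for a 1-D parallel camera setup, the arrangement commonly used in MPEG FTV experiments; the function name and parameters are our own:

```python
import numpy as np

def warp_to_virtual_view(ref_img, depth, f, baseline, z_near, z_far):
    """Minimal 3D warp for a 1-D parallel camera setup (hypothetical helper).

    MPEG FTV depth maps store inverse depth as 8-bit values; disparity for
    a horizontal baseline is d = f * B / Z.
    """
    h, w = depth.shape
    # Recover metric depth Z from the 8-bit depth map.
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(f * baseline / z).astype(int)

    virt = np.zeros_like(ref_img)
    filled = np.zeros((h, w), dtype=bool)
    # Warp back-to-front so nearer pixels overwrite farther ones.
    order = np.argsort(z, axis=None)[::-1]
    for idx in order:
        y, x = divmod(idx, w)
        xv = x + disparity[y, x]
        if 0 <= xv < w:
            virt[y, xv] = ref_img[y, x]
            filled[y, xv] = True
    return virt, filled  # 'filled' marks disocclusion holes

```

The holes flagged by the `filled` mask are what blending and inpainting stages, and reliability-based synthesis in particular, must handle; depth coding distortion shifts `disparity` and produces exactly the boundary artifacts the paper targets.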
{"title":"Reducing bitrates of compressed video with enhanced view synthesis for FTV","authors":"Lu Yang, M. O. Wildeboer, T. Yendo, M. P. Tehrani, T. Fujii, M. Tanimoto","doi":"10.1109/PCS.2010.5702575","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702575","url":null,"abstract":"View synthesis using depth maps is a well-known technique for exploiting the redundancy between multi-view videos. In this paper, we deal with the bitrates of view synthesis at the decoder side of FTV that would use compressed depth maps and views. Both inherent depth estimation error and coding distortion would degrade synthesis quality. The focus is to reduce bitrates required for generating the high-quality virtual view. We employ a reliable view synthesis method which is compared with standard MPEG view synthesis software. The experimental results show that the bitrates required for synthesizing high-quality virtual view could be reduced by utilizing our enhanced view synthesis technique to improve the PSNR at medium bitrates.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127575055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved texture compression for S3TC
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702515
Yifei Jiang, Dandan Huan
Texture compression is a specialized form of still image compression employed in computer graphics systems to reduce memory bandwidth consumption. Existing texture compression schemes cannot deliver satisfactory quality for both the alpha channel and the color channel of texture images. We propose a novel texture compression scheme, named ImTC, based on insight into the essential difference between transparency and color. ImTC defines new data formats and compresses the two channels flexibly. While keeping the same compression ratio as the de facto standard texture compression scheme, ImTC improves the compression quality of both channels. Over a set of test images, the average PSNR of the alpha channel is improved by about 0.2 dB, and that of the color channel can be increased by 6.50 dB, which makes ImTC a better substitute for the standard scheme.
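For context, the de facto standard the paper measures against is S3TC/DXT1, which stores each 4×4 block as two endpoint colors plus 2-bit palette indices. A deliberately naive encoder sketch (the min/max endpoint choice is illustrative; this is the baseline scheme, not ImTC):

```python
import numpy as np

def encode_dxt1_block(block):
    """Encode one 4x4 RGB block in the spirit of S3TC/DXT1. Endpoints are
    chosen naively as per-channel min/max; production encoders search
    harder (e.g. PCA along the color cloud)."""
    pixels = block.reshape(-1, 3).astype(float)
    c0, c1 = pixels.max(axis=0), pixels.min(axis=0)   # two endpoint colors
    # DXT1 palette: endpoints plus interpolated colors at 1/3 and 2/3.
    palette = np.array([c0, c1, (2 * c0 + c1) / 3, (c0 + 2 * c1) / 3])
    # 2-bit index per pixel: nearest palette entry in RGB space.
    dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    indices = dists.argmin(axis=1)
    # Packed: 2x RGB565 endpoints + 32 index bits = 8 bytes/block = 4 bpp.
    return c0, c1, indices
```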
{"title":"Improved texture compression for S3TC","authors":"Yifei Jiang, Dandan Huan","doi":"10.1109/PCS.2010.5702515","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702515","url":null,"abstract":"Texture compression is a specialized form of still image compression employed in computer graphics systems to reduce memory bandwidth consumption. Modern texture compression schemes cannot generate satisfactory qualities for both alpha channel and color channel of texture images. We propose a novel texture compression scheme, named ImTC, based on the insight into the essential difference between transparency and color. ImTC defines new data formats and compresses the two channels flexibly. While keeping the same compression ratio as the de facto standard texture compression scheme, ImTC improves compression qualities of both channels. The average PSNR score of alpha channel is improved by about 0.2 dB, and that of color channel can be increased by 6.50 dB over a set of test images, which makes ImTC a better substitute for the standard scheme.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117262037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medium-granularity computational complexity control for H.264/AVC
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702467
Xiang Li, M. Wien, J. Ohm
Video applications on handheld devices are becoming increasingly popular. Because of the limited computational capability of such devices, complexity-constrained video coding has drawn much attention. In this paper, a medium-granularity computational complexity control (MGCC) is proposed for H.264/AVC. First, a large dynamic range in complexity is achieved by taking 16×16 motion estimation in a single reference frame as the basic computational unit. Then, high coding efficiency is obtained by adaptive computation allocation at the macroblock (MB) level. Simulations show that coarse-granularity methods fail when the normalized complexity is below 15%. In contrast, the proposed MGCC performs well even when the complexity is reduced to 8.8%. Moreover, an average BD-PSNR gain of 0.3 dB over coarse-granularity methods is obtained for 11 sequences when the complexity is around 20%.
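A toy illustration of MB-level computation allocation under a global budget, where one unit corresponds to one 16×16 motion search in a single reference frame; the proportional rule below is our simplification, not the paper's exact algorithm:

```python
def allocate_mb_budget(total_units, mb_costs):
    """Toy MB-level computation allocator (illustrative simplification).
    One 'unit' = one 16x16 motion search in one reference frame; MBs
    predicted to be harder (e.g. larger co-located residual in the
    previous frame) receive more units, subject to the global budget."""
    assert total_units >= len(mb_costs), "need at least one unit per MB"
    total_cost = sum(mb_costs) or 1
    budgets, spent = [], 0
    for i, cost in enumerate(mb_costs):
        remaining = len(mb_costs) - i - 1
        share = max(1, round(total_units * cost / total_cost))
        # Never exceed the budget or starve the remaining MBs.
        share = min(share, total_units - spent - remaining)
        budgets.append(share)
        spent += share
    return budgets
```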
{"title":"Medium-granularity computational complexity control for H.264/AVC","authors":"Xiang Li, M. Wien, J. Ohm","doi":"10.1109/PCS.2010.5702467","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702467","url":null,"abstract":"Today, video applications on handheld devices become more and more popular. Due to limited computational capability of handheld devices, complexity constrained video coding draws much attention. In this paper, a medium-granularity computational complexity control (MGCC) is proposed for H.264/AVC. First, a large dynamic range in complexity is achieved by taking 16×16 motion estimation in a single reference frame as the basic computational unit. Then a high coding efficiency is obtained by an adaptive computation allocation at MB level. Simulations show that coarse-granularity methods cannot work when the normalized complexity is below 15%. In contrast, the proposed MGCC performs well even when the complexity is reduced to 8.8%. Moreover, an average gain of 0.3 dB over coarse-granularity methods in BD-PSNR is obtained for 11 sequences when the complexity is around 20%.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124532965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bit allocation of vertices and colors for patch-based coding in time-varying meshes
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702449
T. Yamasaki, K. Aizawa
This paper discusses bit-rate assignments for vertices, color, reference frames, and target frames in the patch-based compression method for time-varying meshes (TVMs). TVMs are non-isomorphic 3D mesh sequences of real-world objects generated from multiview images. Experimental results demonstrate that the bit rate for vertices greatly affects the visual quality of the rendered 3D model, whereas the bit rate for color contributes little to quality improvement. Therefore, as many bits as possible should be assigned to vertices; 8–10 bits per vertex (bpv) per frame is sufficient for color. For interframe coding, the visual quality improves in proportion to the bit rate of both vertices and color. However, it is demonstrated that fewer bits (5–6 bpv) suffice to achieve a visual quality that matches the intraframe visual quality.
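A minimal sketch of the allocation rule these findings suggest; the bpv figures come from the abstract, but the split function itself is our illustration:

```python
def split_tvm_budget(total_bpv, intra):
    """Split a per-frame, per-vertex bit budget between geometry and color,
    following the reported finding: color quality saturates around 8-10 bpv
    in intra frames (5-6 bpv suffices for inter frames), so remaining bits
    are best spent on vertex positions. The ranges are the paper's; the
    rule is an assumption for illustration."""
    color_bpv = 9.0 if intra else 5.5        # midpoints of reported ranges
    vertex_bpv = max(total_bpv - color_bpv, 0.0)
    return vertex_bpv, color_bpv
```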
{"title":"Bit allocation of vertices and colors for patch-based coding in time-varying meshes","authors":"T. Yamasaki, K. Aizawa","doi":"10.1109/PCS.2010.5702449","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702449","url":null,"abstract":"This paper discusses bit-rate assignments for vertices, color, reference frames, and target frames in the patch-based compression method for time-varying meshes (TVMs). TVMs are nonisomorphic 3D mesh sequences of the real-world objects generated from multiview images. Experimental results demonstrate that the bit rate for vertices greatly affects the visual quality of the rendered 3D model, whereas the bit rate for color does not contribute to quality improvement. Therefore, as many bits as possible should be assigned to vertices, with 8–10 bits per vertex (bpv) per frame being sufficient for color. For interframe coding, the visual quality is improved in proportion to the bit rate of both vertices and color. However, it is demonstrated that the use of fewer bits (5∼6 bpv) is sufficient to achieve a visual quality that matches the intraframe visual quality.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123220500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the duality of rate allocation and quality indices
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702484
T. Richter
In a recent work [16], the author proposed to study the performance of still-image quality indices such as SSIM by using them as the objective function of rate allocation algorithms. The outcome of that work was not only a multi-scale SSIM-optimal JPEG 2000 implementation, but also a first-order approximation of the MS-SSIM that is surprisingly similar to more traditional approaches based on contrast sensitivity and visual masking. This work shows that the only difference between those approaches and the MS-SSIM index is the choice of the exponent of the masking term, and furthermore, that a slight modification of the SSIM definition reproducing the traditional exponent improves the performance of the index at or below the visual threshold. It is hence demonstrated that the duality of quality indices and rate allocation helps to improve both the visual performance of the compression codec and the performance of the index.
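For reference, the single-scale SSIM index underlying MS-SSIM combines luminance, contrast, and structure terms; a global-statistics sketch is below (the usual implementation instead slides an 11×11 Gaussian window over the image):

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-scale SSIM computed from whole-image statistics, shown only
    to make the luminance/contrast/structure terms concrete. c1 and c2
    are the standard stabilizing constants for 8-bit data."""
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

The masking-term exponent the paper discusses corresponds to the power applied to the contrast/structure factor; raising or lowering it is what moves the index toward or away from traditional visual-masking models.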
{"title":"On the duality of rate allocation and quality indices","authors":"T. Richter","doi":"10.1109/PCS.2010.5702484","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702484","url":null,"abstract":"In a recent work [16], the author proposed to study the performance of still image quality indices such as the SSIM by using them as objective function of rate allocation algorithms. The outcome of that work was not only a multi-scale SSIM optimal JPEG 2000 implementation, but also a first-order approximation of the MS-SSIM that is surprisingly similar to more traditional contrast-sensitivity and visual masking based approaches. It will be seen in this work that the only difference between the latter works and the MS-SSIM index is the choice of the exponent of the masking term, and furthermore, that a slight modification of the SSIM definition reproducing the traditional exponent is able to improve the performance of the index at or below the visual threshold. It is hence demonstrated that the duality of quality indices and rate allocation helps to improve both the visual performance of the compression codec and the performance of the index.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130777349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A real-time system of distributed video coding
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702557
K. Sakomizu, T. Yamasaki, Satoshi Nakagawa, T. Nishi
This paper presents a real-time system for distributed video coding (DVC). DVC is an emerging video compression paradigm whose decoding process is normally complex, which makes real-time implementation difficult. To address this problem, we propose a new DVC configuration with three complexity-reducing methods: simple rate control without the feedback channel, simple transmission of the dynamic range, and simple bidirectional motion estimation. We then implement the system with parallelization techniques, and we also develop the encoder for a low-power processor. Experimental results show that the encoder operates at about 13 fps for CIF on a 400 MHz i.MX31, and the decoder at more than 30 fps for CIF on a 2.83 GHz Core 2 Quad.
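The "simple bidirectional motion estimation" idea can be illustrated as follows: assuming the Wyner-Ziv frame lies midway between two key frames, search a linear motion trajectory and average the two motion-compensated predictions. A deliberately naive sketch (block size, search range, and names are ours, and a real-time system would use a far cheaper search):

```python
import numpy as np

def side_information(prev, nxt, blk=16, rng=4):
    """Toy bidirectional motion interpolation for a mid-way WZ frame:
    for each block, find the linear motion between the key frames that
    minimizes SAD, then average the two motion-compensated blocks."""
    h, w = prev.shape
    si = np.zeros_like(prev)
    for by in range(0, h - blk + 1, blk):
        for bx in range(0, w - blk + 1, blk):
            best, best_mv = None, (0, 0)
            for dy in range(-rng, rng + 1):
                for dx in range(-rng, rng + 1):
                    y0, x0 = by + dy, bx + dx      # block in previous frame
                    y1, x1 = by - dy, bx - dx      # mirrored block in next
                    if not (0 <= y0 <= h - blk and 0 <= x0 <= w - blk and
                            0 <= y1 <= h - blk and 0 <= x1 <= w - blk):
                        continue
                    p = prev[y0:y0 + blk, x0:x0 + blk].astype(int)
                    n = nxt[y1:y1 + blk, x1:x1 + blk].astype(int)
                    sad = np.abs(p - n).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            dy, dx = best_mv
            p = prev[by + dy:by + dy + blk, bx + dx:bx + dx + blk].astype(int)
            n = nxt[by - dy:by - dy + blk, bx - dx:bx - dx + blk].astype(int)
            si[by:by + blk, bx:bx + blk] = (p + n) // 2
    return si
```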
{"title":"A real-time system of distributed video coding","authors":"K. Sakomizu, T. Yamasaki, Satoshi Nakagawa, T. Nishi","doi":"10.1109/PCS.2010.5702557","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702557","url":null,"abstract":"This paper presents a real-time system of distributed video coding (DVC). DVC is a current video compression paradigm. The decoding process of DVC is normally complex, which causes difficulty in real-time implementation. To address this problem, we propose a new configuration of DVC with three methods: simple rate control without the feedback channel, simple transmitting of dynamic range and simple bidirectional motion estimation to reduce complexity. Then we implement the system with parallelization techniques. We also develop the encoder for a low power processor. Experimental results show that the encoder on i.MX31 400 MHz could operates at about CIF 13 fps, and the decoder on Core 2 Quad 2.83 GHz operates at more than CIF 30 fps.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128721470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved Wyner-Ziv video coding with feedback channel
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702468
Feng Ye, Aidong Men, Bo Yang, Manman Fan, Kan Chang
This paper presents an improved feedback-assisted, low-complexity Wyner-Ziv video coding (WZVC) scheme. Its performance is improved by two enhancements: an improved mode-based key frame encoding and a 3DRS-assisted (three-dimensional recursive search assisted) motion estimation algorithm for WZ encoding. Experimental results show that our coding scheme achieves significant gains over a state-of-the-art transform-domain Wyner-Ziv (TDWZ) codec while keeping the encoding complexity low.
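3DRS itself is a published algorithm: rather than a full search, each block evaluates a small candidate set drawn from spatial neighbours already processed in scan order, the previous frame's vector field, and random updates. A sketch of candidate generation (the exact indices and the SAD evaluation loop are omitted and illustrative):

```python
import random

def threedrs_candidates(mv_cur, mv_prev, bx, by, nbw, nbh):
    """Candidate motion vectors in the spirit of 3DRS. mv_cur holds the
    (partially filled) current vector field, mv_prev the previous one;
    each entry is a (dy, dx) tuple. The caller picks the candidate with
    the lowest matching cost."""
    cands = []
    if bx > 0:
        cands.append(mv_cur[by][bx - 1])          # left neighbour (spatial)
    if by > 0:
        cands.append(mv_cur[by - 1][bx])          # top neighbour (spatial)
    cands.append(mv_prev[by][bx])                 # same block, previous field
    if by + 1 < nbh and bx + 1 < nbw:
        cands.append(mv_prev[by + 1][bx + 1])     # not-yet-coded temporal nb.
    base = cands[0]
    uy = random.choice([-2, -1, 1, 2])            # small random update keeps
    ux = random.choice([-2, -1, 1, 2])            # the field converging
    cands.append((base[0] + uy, base[1] + ux))
    cands.append((0, 0))                          # zero-vector fallback
    return cands
```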
{"title":"An improved Wyner-Ziv video coding with feedback channel","authors":"Feng Ye, Aidong Men, Bo Yang, Manman Fan, Kan Chang","doi":"10.1109/PCS.2010.5702468","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702468","url":null,"abstract":"This paper presents an improved feedback-assisted low complexity WZVC scheme. The performance of this scheme is improved by two enhancements: an improved mode-based key frame encoding and a 3DRS-assisted (three-dimensional recursive search assisted) motion estimation algorithm for WZ encoding. Experimental results show that our coding scheme can achieve significant gain compared to state-oft he-art TDWZ codec while still low encoding complexity.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126833012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dictionary learning-based distributed compressive video sensing
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702466
Hung-Wei Chen, Li-Wei Kang, Chun-Shien Lu
We address the important problem of fully low-cost, low-complexity video compression for use in severely resource-limited sensors/devices. Conventional motion estimation-based video compression and distributed video coding (DVC) techniques all rely on a high-cost mechanism in which sensing/sampling and compression are performed separately, resulting in unnecessary consumption of resources: most of the acquired raw video data is discarded in the (possibly) complex compression stage. In this paper, we propose a dictionary learning-based distributed compressive video sensing (DCVS) framework that “directly” acquires compressed video data. Embedded in the compressive sensing (CS)-based single-pixel camera architecture, DCVS compressively senses each video frame in a distributed manner. At the DCVS decoder, video reconstruction is formulated as an l1-minimization problem, solving for the sparse coefficients with respect to a set of basis functions. We investigate adaptive dictionary/basis learning for each frame based on training samples extracted from previously reconstructed neighboring frames, and argue that a much better basis can be obtained to represent the frame, compared to fixed-basis representations and recent popular “CS-based DVC” approaches that do not rely on dictionary learning.
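The decoder-side l1-minimization can be made concrete with a basic iterative shrinkage-thresholding (ISTA) solver. In this setting, A would stand for the product of the CS measurement matrix Phi and the learned dictionary D, and the reconstructed frame would be D @ a; these names are our assumptions, and the paper does not specify its solver:

```python
import numpy as np

def ista(y, A, lam=0.1, steps=200):
    """Solve min_a 0.5*||y - A a||^2 + lam*||a||_1 by iterative
    shrinkage-thresholding (ISTA): a gradient step on the quadratic term
    followed by soft-thresholding, which enforces sparsity."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ a - y)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return a
```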
{"title":"Dictionary learning-based distributed compressive video sensing","authors":"Hung-Wei Chen, Li-Wei Kang, Chun-Shien Lu","doi":"10.1109/PCS.2010.5702466","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702466","url":null,"abstract":"We address an important issue of fully low-cost and low-complex video compression for use in resource-extremely limited sensors/devices. Conventional motion estimation-based video compression or distributed video coding (DVC) techniques all rely on the high-cost mechanism, namely, sensing/sampling and compression are disjointedly performed, resulting in unnecessary consumption of resources. That is, most acquired raw video data will be discarded in the (possibly) complex compression stage. In this paper, we propose a dictionary learning-based distributed compressive video sensing (DCVS) framework to “directly” acquire compressed video data. Embedded in the compressive sensing (CS)-based single-pixel camera architecture, DCVS can compressively sense each video frame in a distributed manner. At DCVS decoder, video reconstruction can be formulated as an l1-minimization problem via solving the sparse coefficients with respect to some basis functions. We investigate adaptive dictionary/basis learning for each frame based on the training samples extracted from previous reconstructed neighboring frames and argue that much better basis can be obtained to represent the frame, compared to fixed basis-based representation and recent popular “CS-based DVC” approaches without relying on dictionary learning.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126350243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3-D video coding using depth transition data
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702453
Woo-Shik Kim, Antonio Ortega, Jaejoon Lee, H. Wey
The objective of this work is to develop a new 3-D video coding system that provides better coding efficiency and improved subjective quality compared to existing 3-D video systems. We analyzed the distortions that occur in rendered views generated using depth image based rendering (DIBR) and classified them in order to evaluate their impact on subjective quality. We found that depth map coding distortion leads to “erosion artifacts” at object boundaries, which significantly degrade perceptual quality. To solve this problem, we propose encoding depth transition data and transmitting it to the decoder. The depth transition data for a given pixel indicates the camera position at which that pixel's depth will change. A main reason to transmit this information explicitly is that it can be used to improve view interpolation at many different intermediate camera positions. Simulation results show that the subjective quality can be significantly improved by reducing the effect of erosion artifacts using the proposed depth transition data. Maximum PSNR gains of about 0.5 dB are also observed.
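One plausible way such per-pixel transition data could steer interpolation is sketched below; the abstract does not fix a data layout, so the array shapes and the left/right framing are assumptions:

```python
import numpy as np

def corrected_depth(depth_left, depth_right, transition, cam_pos):
    """Sketch of using per-pixel depth transition data in view
    interpolation: transition[y, x] is the camera position at which that
    pixel's depth flips from the left-view value to the right-view value,
    so boundary pixels stop 'eroding' as the virtual camera moves."""
    use_left = cam_pos < transition          # boolean mask per pixel
    return np.where(use_left, depth_left, depth_right)
```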
{"title":"3-D video coding using depth transition data","authors":"Woo-Shik Kim, Antonio Ortega, Jaejoon Lee, H. Wey","doi":"10.1109/PCS.2010.5702453","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702453","url":null,"abstract":"The objective of this work is to develop a new 3-D video coding system which can provide better coding efficiency with improved subjective quality as compared to existing 3-D video systems. We have analyzed the distortions that occur in rendered views generated using depth image based rendering (DIBR) and classified them in order to evaluate their impact on subjective quality. As a result, we found that depth map coding distortion leads to “erosion artifacts” at object boundaries, which lead to significant degradation in perceptual quality. To solve this problem, we propose a solution in which depth transition data is encoded and transmitted to the decoder. Depth transition data for a given pixel indicates the camera position for which this pixel's depth will change. A main reason to consider transmitting explicitly this information is that it can be used to improve view interpolation at many different intermediate camera positions. Simulation results show that the subjective quality can be significantly improved by reducing the effect of erosion artifacts, using our proposed depth transition data. Maximum PSNR gains of about 0.5 dB can also be observed.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124260549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inter prediction based on spatio-temporal adaptive localized learning model
Pub Date: 2010-12-01. DOI: 10.1109/PCS.2010.5702459
Hao Chen, R. Hu, Zhongyuan Wang, Rui Zhong
Inter prediction based on block-matching motion estimation is important for video coding, but it suffers from the data-rate overhead of the motion information that must be transmitted to the decoder. To solve this problem, we present an improved implicit-motion-information inter prediction algorithm for P slices in H.264/AVC based on the spatio-temporal adaptive localized learning (STALL) model. Following the 4×4 block transform structure of H.264/AVC, we adaptively choose nine spatial neighbors and nine temporal neighbors, and design a localized 3D causal cube as the training window. Using this information, the model parameters are computed adaptively with the least-square prediction (LSP) method. Finally, we add a new inter prediction mode for P slices to the H.264/AVC standard. The experimental results show that our algorithm improves coding efficiency compared with the H.264/AVC standard, at the cost of a moderate increase in complexity.
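The LSP step can be sketched directly: within the causal training window, regress each known pixel on its 18 neighbours (9 spatial + 9 temporal, per the abstract) and reuse the fitted weights for the current pixel. The window construction and array shapes below are our assumptions:

```python
import numpy as np

def lsp_predict(neighbors_train, targets_train, neighbors_cur):
    """Least-Square Prediction as used in STALL-style implicit inter
    prediction. neighbors_train: (n_samples, 18) matrix of causal-window
    neighbourhoods; targets_train: (n_samples,) known pixel values;
    neighbors_cur: (18,) or (k, 18) neighbourhood(s) to predict."""
    # Fit per-window predictor weights by ordinary least squares.
    w, *_ = np.linalg.lstsq(neighbors_train, targets_train, rcond=None)
    return neighbors_cur @ w          # predicted pixel value(s)
```

Because the decoder can rebuild the same training window from already-decoded pixels, it recomputes the identical weights, which is what makes the motion information implicit: no vectors need to be transmitted.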
{"title":"Inter prediction based on spatio-temporal adaptive localized learning model","authors":"Hao Chen, R. Hu, Zhongyuan Wang, Rui Zhong","doi":"10.1109/PCS.2010.5702459","DOIUrl":"https://doi.org/10.1109/PCS.2010.5702459","url":null,"abstract":"Inter prediction based on block matching motion estimation is important for video coding. But this method suffers from the additional overhead in data rate representing the motion information that needs to be transmitted to the decoder. To solve this problem, we present an improved implicit motion information inter prediction algorithm for P slice in H.264/AVC based on the spatio-temporal adaptive localized learning (STALL) model. According to 4 × 4 block transform structure in H.264/AVC, we first adaptively choose nine spatial neighbors and nine temporal neighbors, and a localized 3D casual cube is designed as training window. By using these information, the model parameters could be adaptively computed based on the Least Square Prediction (LSP) method. Finally, we add a new inter prediction mode into H.264/AVC standard for P slice. The experimental results show that our algorithm improves encoding efficiency compared with H.264/AVC standard, with relatively increases in complexity.","PeriodicalId":255142,"journal":{"name":"28th Picture Coding Symposium","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117350256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}