Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706422
Title: Shaking video synthesis for video stabilization performance assessment
Hui Qu, Li Song, Gengjian Xue
The goal of video stabilization is to remove unwanted camera motion and obtain a stable version of the video. Ideally, a good stabilization algorithm should remove the unwanted motion without loss of image quality. However, due to the lack of ground-truth video frames, accurately evaluating the performance of different algorithms is difficult. Most existing evaluation techniques synthesize stable videos from shaking ones, but they are not effective enough. Different from previous methods, in this paper we propose a novel method that synthesizes shaking videos from stable frames. Based on the synthetic shaking videos, we perform a preliminary performance assessment of three stabilization algorithms. Our shaking video synthesis method not only provides a benchmark for full-reference video stabilization performance assessment, but also offers a basis for exploring the theoretical bound of video stabilization, which may help improve existing stabilization algorithms.
{"title":"Shaking video synthesis for video stabilization performance assessment","authors":"Hui Qu, Li Song, Gengjian Xue","doi":"10.1109/VCIP.2013.6706422","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706422","url":null,"abstract":"The goal of video stabilization is to remove the unwanted camera motion and obtain stable versions. Theoretically, a good stabilization algorithm should remove the unwanted motion without the loss of image qualities. However, due to the lack of ground-truth video frames, the accurate performance evaluation of different algorithms is hard. Most existing evaluation techniques usually synthesize stable videos from shaking ones, but they are not effective enough. Different from previous methods, in this paper we propose a novel method which synthesize shaking videos from stable frames. Based on the synthetic shaking videos, we perform preliminary video stabilization performance assessment on three stabilization algorithms. Our shaking video synthesis method can not only give a benchmark for full-reference video stabilization performance assessment, but also provide a basis for exploring the theoretical bound of video stabilization which may help to improve existing stabilization algorithms.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122225887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706340
Title: A parallel root-finding method for omnidirectional image unwrapping
N. Chong, M. D. Wong, Y. Kho
The panoramic unwrapping of catadioptric omnidirectional view (COV) sensors has mostly relied on a precomputed mapping look-up table, because the computational load is expensive and its bottleneck generally lies in solving a sextic polynomial. However, this approach limits viewpoint dynamics, since runtime modification of the mapping values is not possible in such an implementation. In this paper, a parallel root-finding technique using the Compute Unified Device Architecture (CUDA) platform is proposed. The proposed method enables on-the-fly computation of the mapping look-up table, thus facilitating real-time, viewpoint-adjustable panoramic unwrapping. Experimental results show that the proposed implementation incurs minimal computational load and runs 10.3 and 2.3 times faster than a current-generation central processing unit (CPU) in single-core and multi-core environments, respectively.
{"title":"A parallel root-finding method for omnidirectional image unwrapping","authors":"N. Chong, M. D. Wong, Y. Kho","doi":"10.1109/VCIP.2013.6706340","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706340","url":null,"abstract":"The panoramic unwrapping of catadioptric omnidirectional view (COV) sensors have mostly relied on a precomputed mapping look-up table due to an expensive computational load that generally has its bottleneck occur at solving a sextic polynomial. However, this approach causes a limitation to the viewpoint dynamics as runtime modifications to the mapping values are not allowed in the implementation. In this paper, a parallel root-finding technique using Compute Unified Device Architecture (CUDA) platform is proposed. The proposed method enables on-the-fly computation of the mapping look-up table thus facilitate in a real-time viewpoint adjustable panoramic unwrapping. Experimental results showed that the proposed implementation incurred minimum computational load, and performed at 10.3 times and 2.3 times the speed of a current generation central processing unit (CPU) respectively on a single-core and multi-core environment.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115848525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706438
Title: Salient object detection in image sequences via spatial-temporal cue
Chuang Gan, Zengchang Qin, Jia Xu, T. Wan
Contemporary video search and categorization are non-trivial tasks due to the massively increasing amount and content variety of videos. We put forward the study of visual saliency models in video; such a model is employed to identify salient objects against the image background. Starting from the observation that motion information in video often attracts more human attention than static images, we devise a region contrast based saliency detection model using spatial-temporal cues (RCST). We introduce and study four saliency principles to realize the RCST, which generalizes previous static-image saliency computation models to video. We conduct experiments on a publicly available video segmentation database, where our method significantly outperforms seven state-of-the-art methods in terms of PR curves, ROC curves, and visual comparison.
{"title":"Salient object detection in image sequences via spatial-temporal cue","authors":"Chuang Gan, Zengchang Qin, Jia Xu, T. Wan","doi":"10.1109/VCIP.2013.6706438","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706438","url":null,"abstract":"Contemporary video search and categorization are non-trivial tasks due to the massively increasing amount and content variety of videos. We put forward the study of visual saliency models in video. Such a model is employed to identify salient objects from the image background. Starting from the observation that motion information in video often attracts more human attention compared to static images, we devise a region contrast based saliency detection model using spatial-temporal cues (RCST). We introduce and study four saliency principles to realize the RCST. This generalizes the previous static image for saliency computational model to video. We conduct experiments on a publicly available video segmentation database where our method significantly outperforms seven state-of-the-art methods with respect to PR curve, ROC curve and visual comparison.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121120919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706387
Title: Multi-scale video text detection based on corner and stroke width verification
Boyu Zhang, Jiafeng Liu, Xianglong Tang
Focusing on video text detection, which is challenging and has wide potential applications, this paper proposes a novel stroke width feature and implements a system that detects text regions based on multi-scale corner detection. In our system, candidate text regions are generated by applying morphological operations to corner points detected at different scales, and non-text regions are filtered out by combining the proposed stroke width feature with simple geometric properties. Moreover, a new multi-instance semi-supervised learning strategy is proposed to handle the unknown contrast parameter in stroke width extraction. Experiments on video frames from different kinds of video shots show that the proposed approach is both efficient and accurate for video text detection.
{"title":"Multi-scale video text detection based on corner and stroke width verification","authors":"Boyu Zhang, Jiafeng Liu, Xianglong Tang","doi":"10.1109/VCIP.2013.6706387","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706387","url":null,"abstract":"Focusing on the video text detection, which is challenging and with wide potential applications, a novel stroke width feature is proposed and a system which detects text regions based on multi-scale corner detection is implemented in this paper. In our system, candidate text regions are generated by applying morphologic operation based on corner points detected in different scales, and non-text regions are filtered by combining proposed stroke width feature with some simple geometric properties. Moreover, there is a new multi-instance semi-supervised learning strategy being proposed in this paper considering the unknown contrast parameter in stroke width extraction. Experiments taken on video frames from different kinds of video shots prove that the proposed approach is both efficient and accurate for video text detection.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121827915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706455
Title: Mid-level feature based local descriptor selection for image search
S. Bucak, A. Saxena, Abhishek Nagar, Felix C. A. Fernandes, Kong-Posh Bhat
The objective in developing compact descriptors for visual image search is to build an image retrieval system that works efficiently and effectively under bandwidth and memory constraints. Selecting the local descriptors to be processed and sent to the server for matching is an integral part of such a system. One such image search and retrieval system is the Compact Descriptors for Visual Search (CDVS) standardization test model being developed by MPEG, which includes efficient local descriptor selection criteria. However, all the existing selection parameters in CDVS are based on low-level features. In this paper, we propose two “mid-level” local descriptor selection criteria, the Visual Meaning Score (VMS) and the Visual Vocabulary Score (VVS), which can be seamlessly integrated into the existing CDVS framework. A mid-level criterion explicitly allows selection of local descriptors closer to a given set of images. Both VMS and VVS are based on visual words (patches) of images, provide significant gains in matching accuracy over the current CDVS standard, and have very low implementation cost.
{"title":"Mid-level feature based local descriptor selection for image search","authors":"S. Bucak, A. Saxena, Abhishek Nagar, Felix C. A. Fernandes, Kong-Posh Bhat","doi":"10.1109/VCIP.2013.6706455","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706455","url":null,"abstract":"The objective in developing compact descriptors for visual image search is building an image retrieval system that works efficiently and effectively under bandwidth and memory constraints. Selecting local descriptors to be processed, and sending them to the server for matching is an integral part of such a system. One such image search and retrieval system is the Compact Descriptors for Visual Search (CDVS) standardization test model being developed by MPEG which has an efficient local descriptor selection criteria. However, all the existing selection parameters in CDVS are based on low-level features. In this paper, we propose two “mid-level” local descriptor selection criteria: Visual Meaning Score (VMS), and Visual Vocabulary Score (VVS) which can be seamlessly integrated into the existing CDVS framework. A mid-level criteria explicitly allows selection of local descriptors closer to a given set of images. Both VMS and VVS are based on visual words (patches) of images, and provide significant gains over the current CDVS standard in terms of matching accuracy, and have very low implementation cost.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114875313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706397
Title: Long-term background memory based on Gaussian mixture model
W. Zhao, X. D. Zhao, W. M. Liu, X. L. Tang
This paper presents a long-term background memory framework that is capable of memorizing long-period background appearances in video and rapidly adapting to background changes. Based on the Gaussian mixture model (GMM), the framework enables accurate identification of long-period background appearances and offers an effective solution to numerous typical problems in foreground detection. Experimental results on various benchmark sequences demonstrate, both quantitatively and qualitatively, that the proposed algorithm outperforms many GMM-based methods for foreground detection, as well as other representative approaches.
{"title":"Long-term background memory based on Gaussian mixture model","authors":"W. Zhao, X. D. Zhao, W. M. Liu, X. L. Tang","doi":"10.1109/VCIP.2013.6706397","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706397","url":null,"abstract":"This paper aims to present a long-term background memory framework, which is capable of memorizing long period background in video and rapidly adapting to the changes of background. Based on Gaussian mixture model (GMM), this framework enables an accurate identification of long period background appearances and presents a perfect solution to numerous typical problems on foreground detection. The experimental results with various benchmark sequences quantitatively and qualitatively demonstrate that the proposed algorithm outperforms many GMM-based methods for foreground detection, as well as other representative approaches.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117186549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706378
Title: A local shape descriptor for mobile linedrawing retrieval
Y. Xuan, Ling-yu Duan, Tiejun Huang
With the rapid spread of camera-equipped intelligent terminals, mobile visual search techniques have undergone a revolution: visual information can be easily browsed and retrieved simply by capturing a query photo. However, most existing work targets compact description of natural-scene image statistics, while handling line drawing images remains an open problem. This paper presents a unified framework for line drawing retrieval in mobile visual search. We propose a compact description of line drawing images, named Local Inner-Distance Shape Context (LISC), which is robust to distortion and occlusion and enjoys scale and rotation invariance. Together with an innovative compression scheme using JBIG2 to reduce query delivery latency, our framework works well on both a self-built dataset and the MPEG-7 CE Shape-1 dataset. Promising results on both datasets show significant improvement over state-of-the-art algorithms.
{"title":"A local shape descriptor for mobile linedrawing retrieval","authors":"Y. Xuan, Ling-yu Duan, Tiejun Huang","doi":"10.1109/VCIP.2013.6706378","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706378","url":null,"abstract":"Coming with the rapid spread of Intelligent terminals with camera, mobile visual search techniques have undergone a revolution, where visual information can be easily browsed and retrieved upon simply capturing a query photo. However, most existing work targets at compact description of natural scene image statistics, while dealing with line drawing images retains an open problem. This paper presents a unified framework of line drawing problems in mobile visual search. We propose a compact description of line drawing image named Local Inner-Distance Shape Context (LISC) which is robust to the distortion and occlusion and enjoys scale and rotation invariance. Together with an innovative compression scheme using JBIG2 to reduce query delivery latency, our framework works well on both a self-built dataset and MPEG- 7 CE Shape-1 dataset. Promising results on both datasets show significant improvement over state-of-the-art algorithms.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"1630 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129265165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706375
Title: Quality enhancement based on retinex and pseudo-HDR synthesis algorithms for endoscopic images
J. Wu, Guo-Shiang Lin, Hsiao-Ting Hsu, You-Peng Liao, Kai-Che Liu, W. Lie
In this paper, we present a quality enhancement scheme for endoscopic images. Traditional algorithms can enhance image contrast, but possible over-enhancement also leads to poor overall visual quality, which hinders accurate examination and instrument operation by surgeons in Minimally Invasive Surgery (MIS). Our proposed scheme integrates the well-known retinex algorithm with a pseudo-HDR (High Dynamic Range) synthesis process and is composed of three parts: multiscale retinex with gamma correction (MSR-G), local brightness range expansion (brightness diversity), and bilateral-filter-based HDR image fusion. Experimental results demonstrate that, compared with other existing methods, the proposed scheme enhances image details while keeping the overall visual quality good.
{"title":"Quality enhancement based on retinex and pseudo-HDR synthesis algorithms for endoscopic images","authors":"J. Wu, Guo-Shiang Lin, Hsiao-Ting Hsu, You-Peng Liao, Kai-Che Liu, W. Lie","doi":"10.1109/VCIP.2013.6706375","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706375","url":null,"abstract":"In this paper, we present a quality enhancement scheme for endoscopic images. Traditional algorithms might be able to enhance the image contrast, but possible over-enhancement also lead to bad overall visual quality which prevents surgeons from accurate examination or operations of instruments in Minimal Invasive Surgery (MIS). Our proposed scheme integrates the well-known retinex algorithm with a pseudo-HDR (High Dynamic Range) synthesis process, designed to compose of three parts: multiscale retinex with gamma correction (MSR-G), local brightness range expansion (brightness diversity), and bilateral-filter-based HDR image fusion. Experiment results demonstrate that the proposed scheme is able to enhance image details and keep the overall visual quality good as well, with respect to other existing methods.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132380689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706372
Title: Correlation estimation for distributed wireless video communication
Xiaoliang Zhu, N. Zhang, Xiaopeng Fan, Ruiqin Xiong, Debin Zhao
One important problem in distributed video coding is estimating the variance of the correlation noise between the video signal and its decoder-side information. This variance is hard to estimate due to the lack of motion vectors at the encoder side. In this paper, we first propose a linear model that estimates this variance from the zero-motion prediction at the encoder, based on a Markov field assumption. Furthermore, we consider not only the prediction noise from the video signal itself but also the additional noise introduced by wireless transmission. We apply our correlation estimation method in our recent distributed wireless visual communication framework, DCAST. The experimental results show that the proposed method improves video PSNR by 0.5-1.5 dB while avoiding motion estimation at the encoder.
{"title":"Correlation estimation for distributed wireless video communication","authors":"Xiaoliang Zhu, N. Zhang, Xiaopeng Fan, Ruiqin Xiong, Debin Zhao","doi":"10.1109/VCIP.2013.6706372","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706372","url":null,"abstract":"One important problem in distributed video coding is to estimate the variance of the correlation noise between the video signal and its decoder side information. This variance is hard to estimate due to the lack of the motion vectors at the encoder side. In this paper, we first propose a linear model to estimate this variance by referring the zero motion prediction at the encoder based on a Markov field assumption. Furthermore, not only the prediction noise from the video signal itself but also the additional noise due to wireless transmission is considered in this paper. We applied our correlation estimation method in our recent distributed wireless visual communication framework called DCAST. The experimental results show that the proposed method improves the video PSNR by 0.5-1.5dB while avoiding motion estimation at encoder.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123455940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01  DOI: 10.1109/VCIP.2013.6706360
Title: Ultra high-definition video coding using bit-depth reduction with image noise reduction and pseudo-contour prevention
Y. Matsuo, T. Misu, S. Iwamura, S. Sakaida
We propose a novel ultra high-definition video coding method with bit-depth reduction before encoding and bit-depth reconstruction after decoding. The bit-depth reduction is performed by Lloyd-Max quantization, taking into account noise reduction in ultra high-definition video for high coding efficiency and gradation preservation for pseudo-contour prevention. The bit-depth reconstruction is carried out accurately using side information, which is determined at the encoder by comparing a locally decoded, bit-depth-reconstructed image with the original image. Experiments show that the proposed method prevents pseudo-contours and achieves better PSNR than conventional video coding methods.
{"title":"Ultra high-definition video coding using bit-depth reduction with image noise reduction and pseudo-contour prevention","authors":"Y. Matsuo, T. Misu, S. Iwamura, S. Sakaida","doi":"10.1109/VCIP.2013.6706360","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706360","url":null,"abstract":"We propose a novel ultra high-definition video coding method with bit-depth reduction before encoding procedure and bit-depth reconstruction after decoding procedure. The bit-depth reduction is performed by Lloyd-Max quantization; considering ultra high-definition video noise reduction for high coding efficiency and gradation conservation for pseudo-contour prevention. The bit-depth reconstruction is carried out accurately using side information which is determined by comparing a local-decoded bit-depth reconstructed image and an original image on encoder side. Experiments show that the proposed method has a pseudo-contour prevention effect and a better PSNR in comparison with conventional video coding methods.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123724341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}