
Latest publications — 2014 IEEE Visual Communications and Image Processing Conference

A novel objective quality assessment method for perceptual video coding in conversational scenarios
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051496
Mai Xu, Jingze Zhang, Yuan Ma, Zulin Wang
Recently, numerous perceptual video coding approaches have been proposed that use faces as ROI regions to improve the perceived visual quality of compressed conversational videos. However, no objective metric exists that is specialized for efficiently evaluating the perceived visual quality of compressed conversational videos. This paper therefore proposes an efficient objective quality assessment method for conversational videos, namely Gaussian mixture model based PSNR (GMM-PSNR). First, eye-tracking experiments, together with a face extraction technique, were carried out to identify the importance of the background, face, and facial-feature regions through eye fixation points. Next, assuming that the distribution of the eye fixation points obeys a Gaussian mixture model, an importance weight map is generated by introducing a new term, eye fixation points per pixel (efp/p). Finally, GMM-PSNR is computed by assigning a different penalty to the distortion of each pixel in a video frame, according to the generated weight map. The experimental results show the effectiveness of GMM-PSNR by investigating its correlation with subjective quality on several test video sequences.
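The abstract does not give the exact weight-map formula, but the idea of a fixation-weighted PSNR can be sketched as follows. This is a minimal illustration, not the paper's method: the isotropic Gaussian components, the `components` parameter layout, and all function names are assumptions.

```python
import math

def gmm_weight(x, y, components):
    """Importance weight at pixel (x, y) from a 2-D Gaussian mixture.

    `components` is a list of (pi_k, mu_x, mu_y, sigma) tuples: mixture
    weight, centre, and isotropic std-dev (a simplification; the paper
    fits the mixture to measured eye-fixation points).
    """
    w = 0.0
    for pi_k, mx, my, s in components:
        d2 = (x - mx) ** 2 + (y - my) ** 2
        w += pi_k * math.exp(-d2 / (2 * s * s)) / (2 * math.pi * s * s)
    return w

def gmm_psnr(ref, dist, components, peak=255.0):
    """Weighted PSNR: per-pixel squared error scaled by the GMM weight map."""
    h, w_ = len(ref), len(ref[0])
    num = den = 0.0
    for y in range(h):
        for x in range(w_):
            wt = gmm_weight(x, y, components)
            num += wt * (ref[y][x] - dist[y][x]) ** 2
            den += wt
    wmse = num / den
    return float('inf') if wmse == 0 else 10 * math.log10(peak * peak / wmse)
```

Pixels near fixation clusters thus dominate the distortion score, while errors in the ignored background are discounted.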
Citations: 3
Sample adaptive offset in AVS2 video standard
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051506
Jing Chen, Sunil Lee, E. Alshina, Yinji Piao
The AVS2 video standard is the next-generation video coding standard being developed by the Audio Video coding Standard (AVS) workgroup of China. This paper presents the design of Sample Adaptive Offset (SAO) in AVS2. To address implementation issues, a shifted structure is adopted in which the SAO parameter region is shifted from the Largest Coding Unit (LCU) towards the upper-left, making the SAO parameter region consistent with the processing region in implementations. Moreover, a category-dependent offset is introduced in the edge type, based on statistical results, to improve offset coding, and non-consecutive offset bands are adopted in the band type to optimize the offset bands. Test results show that SAO achieves on average 0.3% to 1.4% luma coding gain under AVS2 common test conditions.
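The edge-offset half of SAO classifies each sample against two neighbours along a chosen direction and adds a per-category offset. A minimal sketch of the standard five-category classification (the AVS2-specific category-dependent offsets and the shifted parameter region are not modelled; function names are mine):

```python
def sao_edge_category(a, c, b):
    """Edge-offset category of sample c given its two neighbours a and b
    (the standard 5-category SAO classification)."""
    if c < a and c < b:
        return 1          # local minimum
    if (c < a and c == b) or (c == a and c < b):
        return 2          # concave corner
    if (c > a and c == b) or (c == a and c > b):
        return 3          # convex corner
    if c > a and c > b:
        return 4          # local maximum
    return 0              # monotonic / flat: no offset applied

def apply_sao_row(row, offsets):
    """Apply edge offsets along one row (horizontal direction class);
    `offsets` maps category index 0..4 to a signed offset."""
    out = list(row)
    for i in range(1, len(row) - 1):
        cat = sao_edge_category(row[i - 1], row[i], row[i + 1])
        out[i] = max(0, min(255, row[i] + cat_offset(offsets, cat)))
    return out

def cat_offset(offsets, cat):
    return offsets[cat]
```

Local minima get a positive offset and local maxima a negative one, pulling reconstructed samples back towards the original signal.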
Citations: 1
Fast algorithm of coding unit depth decision for HEVC intra coding
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051605
Xiaofeng Huang, Huizhu Jia, Kaijin Wei, Jie Liu, Chuang Zhu, Zhengguang Lv, Don Xie
The emerging High Efficiency Video Coding standard (HEVC) achieves significantly better coding efficiency than all existing video coding standards. The quadtree-structured coding unit (CU) adopted in HEVC improves compression efficiency, but it incurs very high computational complexity because the encoder exhausts all combinations of prediction units (PUs) and transform units (TUs) for every CU candidate. To alleviate the computational burden of HEVC intra coding, a fast CU depth decision algorithm is proposed in this paper. The CU texture complexity and the correlation between the current CU and neighbouring CUs are adaptively taken into account when deciding the CU split and the CU depth search range. Experimental results show that the proposed scheme saves 39.3% of encoder time on average compared to the default encoding scheme in HM-RExt-13.0, with only a 0.6% BDBR penalty in coding performance.
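The idea of combining texture complexity with neighbour-depth correlation to prune the CU depth search can be sketched as follows. The thresholds and the exact clamping rule are illustrative assumptions, not the paper's values:

```python
def texture_complexity(block):
    """Mean absolute deviation of a luma block -- a cheap texture measure."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    return sum(abs(p - mean) for p in flat) / len(flat)

def cu_depth_range(block, neighbour_depths, t_smooth=2.0, t_complex=12.0):
    """Decide a restricted CU depth search range (0 = 64x64 ... 3 = 8x8).

    Smooth blocks skip deep splits, complex blocks skip shallow ones, and
    the range is further clamped around the depths of already-coded
    neighbouring CUs (spatial correlation). Thresholds are illustrative.
    """
    c = texture_complexity(block)
    if c < t_smooth:
        lo, hi = 0, 1        # smooth: large CUs suffice
    elif c > t_complex:
        lo, hi = 2, 3        # complex: only small CUs worth trying
    else:
        lo, hi = 0, 3        # ambiguous: full search
    if neighbour_depths:     # exploit correlation with neighbouring CUs
        lo = max(lo, min(neighbour_depths) - 1)
        hi = min(hi, max(neighbour_depths) + 1)
    return max(0, lo), min(3, hi)
```

Skipping even one depth level removes an entire tier of PU/TU rate-distortion evaluations, which is where the reported encoder-time saving comes from.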
Citations: 10
Insights into the role of feedbacks in the tracking loop of a modular fall-detection algorithm
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051592
L. Boulard, E. Baccaglini, R. Scopigno
In this paper we propose an innovative video-based architecture aimed at monitoring elderly people. It is based on inexpensive devices and open-source libraries, and preliminary tests demonstrate that it achieves good performance. The overall architecture of the system and its implementation are briefly discussed in terms of the composing functional blocks, also analyzing the effects of the feedbacks on the effectiveness of the algorithm.
Citations: 1
Fusion side information based on feature and motion extraction for distributed multiview video coding
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051594
Hui Yin, Mengyao Sun, Yumei Wang, Yu Liu
In distributed multiview video coding (DMVC), the quality of the side information (SI) is crucial for decoding and reconstructing the Wyner-Ziv (WZ) frames. SI quality is generally degraded for two main reasons: the moving object in a WZ frame is easily misestimated because of fast motion, and the background around the moving object is easily misestimated because of occlusion. Accordingly, a novel SI fusion method is proposed that exploits complementary schemes to reconstruct the different regions. Motion detection extracts the moving object, which is predicted using both temporal and spatial correlations, while the background around the moving object is predicted using temporal correlations alone. Notably, the prediction method used in this paper is based on a feature-based global motion model. The experimental results show high SI quality for the WZ frames and a significant improvement in rate-distortion (RD) performance, especially for sequences with fast-moving objects.
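The per-region fusion rule can be sketched in a few lines. This is a stand-in, not the paper's algorithm: the motion mask here is a simple frame-difference threshold rather than the feature-based global motion model, and `spatial_si` stands in for whatever spatially-assisted prediction the decoder has available.

```python
def fuse_side_info(temporal_si, spatial_si, prev, thresh=20):
    """Per-pixel SI fusion: where motion is detected (large difference
    between the temporal prediction and the previous decoded frame),
    use the spatially-assisted prediction; elsewhere keep the temporal
    prediction, which is reliable for static background."""
    h, w = len(prev), len(prev[0])
    fused = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            moving = abs(temporal_si[y][x] - prev[y][x]) > thresh
            fused[y][x] = spatial_si[y][x] if moving else temporal_si[y][x]
    return fused
```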
Citations: 3
Accurate image noise level estimation by high order polynomial local surface approximation and statistical inference
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051581
Tingting Kou, Lei Yang, Y. Wan
Image noise level estimation is an important step in many image processing tasks such as denoising, compression, and segmentation. Although recently proposed SVD- and PCA-based approaches have produced the most accurate estimates so far, these linear-subspace methods still suffer from contamination by the clean signal content, especially at low noise levels. In addition, the common performance evaluation procedure currently in use treats test images as noise-free; this ignores the noise already present in those test images and invariably incurs a bias. This paper makes two contributions. First, we propose a new noise level estimation method using nonlinear local surface approximation: the noise-free content of each block is approximated with a high-degree polynomial, the block residual variances, which follow a chi-squared distribution, are sorted, and the upper quantile of a carefully chosen size is used for estimation. Second, we propose a new performance evaluation procedure that is free from the influence of the noise already present in the test images. Experimental results show substantially better performance than typical state-of-the-art methods in terms of both estimation accuracy and stability.
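The fit-then-pool-residuals idea can be sketched as follows. Two deliberate simplifications: a first-order (plane) fit replaces the paper's high-order polynomial to keep the sketch dependency-free, and the quantile step here simply averages the smallest variances, whereas the paper derives the quantile size from chi-squared statistics.

```python
import statistics

def plane_residual_var(block):
    """Residual variance after fitting a plane a + b*x + c*y to a block.

    The centred coordinate grid makes the least-squares normal equations
    decouple, so b and c have closed forms.
    """
    h, w = len(block), len(block[0])
    xs = [x - (w - 1) / 2 for x in range(w)]
    ys = [y - (h - 1) / 2 for y in range(h)]
    n = h * w
    mean = sum(map(sum, block)) / n
    sxx = sum(x * x for x in xs) * h
    syy = sum(y * y for y in ys) * w
    b = sum(block[j][i] * xs[i] for j in range(h) for i in range(w)) / sxx
    c = sum(block[j][i] * ys[j] for j in range(h) for i in range(w)) / syy
    ss = 0.0
    for j in range(h):
        for i in range(w):
            r = block[j][i] - (mean + b * xs[i] + c * ys[j])
            ss += r * r
    return ss / (n - 3)          # 3 fitted parameters

def estimate_noise_sigma(blocks, quantile=0.1):
    """Sort per-block residual variances and average a chosen quantile
    of them; textured blocks inflate the variance, so pooling only the
    least-textured blocks reduces signal contamination."""
    variances = sorted(plane_residual_var(b) for b in blocks)
    k = max(1, int(len(variances) * quantile))
    return statistics.mean(variances[:k]) ** 0.5
```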
Citations: 1
A proposed accelerated image copy-move forgery detection
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051552
Sondos M. Fadl, N. Semary
Image forgery detection is currently an active research field in image processing, and Copy-Move (CM) forgery is one of the most frequently used manipulation techniques. In this paper, we propose an efficient and fast method for detecting copy-move regions that accelerates the block-matching strategy. First, the image is divided into fixed-size overlapping blocks, and the discrete cosine transform is applied to each block to represent its features. Fast k-means clustering groups the blocks into classes, and zigzag scanning reduces the length of each block's feature vector. The feature vectors within each cluster are lexicographically sorted by radix sort; the correlation between nearby blocks in the sorted order indicates their similarity. The experimental results demonstrate that the proposed method detects duplicated regions efficiently and reduces processing time by up to 50% compared with previous works.
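The core block-matching pipeline (overlapping blocks → DCT → zigzag truncation → lexicographic sort → neighbour comparison) can be sketched as below. The k-means clustering and radix-sort acceleration from the paper are omitted, block/feature sizes are illustrative, and a naive O(N^4) DCT is fine at this scale:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an NxN block (adequate for a sketch)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def zigzag(coeffs, keep):
    """First `keep` coefficients in anti-diagonal (low-frequency-first)
    order, rounded so identical blocks compare equal."""
    n = len(coeffs)
    order = sorted(((u + v, v if (u + v) % 2 else u, u, v)
                    for u in range(n) for v in range(n)))
    return [round(coeffs[u][v], 4) for _, _, u, v in order[:keep]]

def find_duplicates(image, bs=4, keep=6):
    """Lexicographically sort truncated block features and report the
    positions of adjacent identical features (candidate copy-move
    pairs). Real detectors also discard spatially adjacent matches."""
    h, w = len(image), len(image[0])
    feats = []
    for y in range(h - bs + 1):
        for x in range(w - bs + 1):
            blk = [row[x:x + bs] for row in image[y:y + bs]]
            feats.append((tuple(zigzag(dct2(blk), keep)), (y, x)))
    feats.sort()
    return [(feats[i - 1][1], feats[i][1])
            for i in range(1, len(feats))
            if feats[i - 1][0] == feats[i][0]]
```

Sorting makes duplicated blocks adjacent, so matching costs O(N log N) comparisons instead of O(N^2) pairwise tests.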
Citations: 19
Avoiding weak parameters in secret image sharing
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051617
M. Mohanty, C. Gehrmann, P. Atrey
Secret image sharing is a popular image hiding scheme that typically uses (3, 3, n) multi-secret sharing to hide the colors of a secret image. The use of (3, 3, n) multi-secret sharing, however, can lead to information loss. In this paper, we study this loss of information from an image perspective and show that one-third of the color values of the secret image can be leaked when the sum of any two selected share numbers equals the prime number used in the secret sharing. Furthermore, we show that if the selected share numbers do not satisfy this condition (for example, when each selected share number is less than half of the prime number), then the colors of the secret image are not leaked. In this case, a noise-like image is reconstructed from knowledge of fewer than three shares.
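The weak-parameter condition can be made concrete with a small sketch, assuming the usual construction in which three color values are hidden as the coefficients of a degree-2 polynomial mod a prime (the paper's exact scheme may differ; names and the prime 251 are illustrative). If two share numbers satisfy x_a + x_b ≡ 0 (mod p), the quadratic term cancels in f(x_a) − f(x_b), exposing the middle coefficient — one of the three hidden colors:

```python
def make_shares(colors, xs, p=251):
    """(3, 3, n)-style multi-secret sharing: the three colour values are
    the coefficients of f(x) = c0 + c1*x + c2*x^2 (mod p); each share
    is the pair (x, f(x))."""
    c0, c1, c2 = colors
    return [(x, (c0 + c1 * x + c2 * x * x) % p) for x in xs]

def leak_middle_color(share_a, share_b, p=251):
    """If x_a + x_b ≡ 0 (mod p), two shares alone reveal c1, because
    f(x_a) - f(x_b) = c1*(x_a - x_b) + c2*(x_a + x_b)*(x_a - x_b)
    and the c2 term vanishes."""
    (xa, ya), (xb, yb) = share_a, share_b
    assert (xa + xb) % p == 0, "leak requires x_a + x_b ≡ 0 (mod p)"
    inv = pow((xa - xb) % p, p - 2, p)   # modular inverse (p prime)
    return ((ya - yb) * inv) % p
```

Choosing all share numbers below p/2, as the abstract suggests, makes the condition x_a + x_b ≡ 0 (mod p) impossible for distinct nonzero shares.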
Citations: 6
Scalar-quantization-based multi-layer data hiding for video coding applications
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051554
Alexey Filippov, Vasily Rufitskiy, V. Potapov
In this paper, we present a novel data-hiding method that does not interfere with other data-hiding techniques (e.g., sign-bit hiding) already included in state-of-the-art coding standards such as HEVC/H.265. A key feature of the proposed technique is that it operates on hierarchically structured units (e.g., the hierarchy in HEVC/H.265 comprising coding, prediction, and transform units). As shown in the paper, the method provides higher coding gain when applied to scalar-quantized values. Finally, we present experimental results that confirm its high RD performance compared with explicit signaling, and we discuss its suitability for HEVC-compatible watermarking.
Citations: 6
A new convex optimization-based two-pass rate control method for object coding in AVS
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051627
X. Yao, S. Chan
This paper proposes a new convex-optimization-based two-pass rate control method for object coding in China's Audio Video coding Standard (AVS). The algorithm adopts a two-pass methodology to overcome the important interdependency between rate control and rate-distortion optimization. An exponential model describes the rate-distortion behavior of the codec, enabling frame-level and object-level rate control under the two-pass framework. Convex programming is used to solve the resulting optimal bit allocation problem. Moreover, region-of-interest (ROI) functionality is realized at the object level. Experimental results illustrate the good performance and effectiveness of the method.
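The convex bit-allocation step can be sketched for a generic exponential R-D model D_i(R_i) = a_i·exp(−b_i·R_i) (the paper's exact model parameters are not given; this form and the bisection solver are assumptions). Minimising total distortion subject to a bit budget yields the equal-slope condition a_i·b_i·exp(−b_i·R_i) = λ, and λ is found by bisection on the budget constraint:

```python
import math

def allocate_bits(models, r_total, tol=1e-9):
    """Optimal bit allocation for objects with D_i(R_i) = a_i*exp(-b_i*R_i).

    `models` is a list of (a_i, b_i) pairs. The Lagrangian optimum is
    R_i = max(0, ln(a_i*b_i/lam)/b_i); lam is found by bisection so
    that the rates sum to r_total.
    """
    def rates(lam):
        return [max(0.0, math.log(a * b / lam) / b) for a, b in models]

    lo, hi = tol, max(a * b for a, b in models)   # bracket lam
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(rates(mid)) > r_total:
            lo = mid      # rates too high -> raise lam
        else:
            hi = mid
    return rates((lo + hi) / 2)
```

The ROI functionality mentioned in the abstract would correspond to scaling the a_i of important objects, which shifts bits toward them under the same budget.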
Citations: 0