
2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP): Latest Publications

Joint optimization of resource allocation and workload scheduling for cloud based multimedia services
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813406
Xiaoming Nan, Yifeng He, L. Guan
With the development of cloud technology, cloud computing has been increasingly used as a distributed platform for multimedia services. However, there are two fundamental challenges for service providers: one is resource allocation, and the other is workload scheduling. Due to the rapidly varying workload and strict response time requirements, it is difficult to optimally allocate virtual machines (VMs) and assign workload. In this paper, we study the resource allocation and workload scheduling problem for cloud based multimedia services. Specifically, we introduce a queuing model to quantify the resource demands and service performance, and a directed acyclic graph (DAG) model to characterize the precedence constraints among jobs. Based on the proposed models, we jointly optimize the allocated VMs and the assigned workload to minimize the total resource cost under the response time constraints. Since the formulated problem is a mixed-integer non-linear program, a heuristic is proposed to efficiently allocate resources for practical services. Experimental results show that the proposed scheme can effectively allocate VMs and schedule workload to achieve the minimal resource cost.
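
To make the flavor of such a heuristic concrete, here is a minimal sketch (not the authors' formulation, which handles a full DAG of jobs and a richer cost model): each stage of a simple service chain is approximated as a queue, and VMs are added greedily where they reduce the end-to-end response time the most until the deadline is met. All rates, the deadline, and the unit VM cost are made-up illustration values.

```python
import math

# Toy sketch (illustrative rates/costs, chain instead of a full DAG): add VMs
# greedily until the summed per-stage response time meets the deadline.

def stage_delay(arrival_rate, service_rate, vms):
    capacity = vms * service_rate
    if capacity <= arrival_rate:
        return math.inf                      # unstable queue: needs more VMs
    return 1.0 / (capacity - arrival_rate)   # simple M/M/1-style approximation

def allocate_vms(stages, deadline, vm_cost=1.0):
    """stages: list of (arrival_rate, service_rate); returns (VMs per stage, total cost)."""
    # start with the minimum number of VMs that keeps every stage stable
    vms = [math.floor(lam / mu) + 1 for lam, mu in stages]
    while sum(stage_delay(lam, mu, m) for (lam, mu), m in zip(stages, vms)) > deadline:
        # add one VM where it reduces the total response time the most
        gains = [stage_delay(lam, mu, m) - stage_delay(lam, mu, m + 1)
                 for (lam, mu), m in zip(stages, vms)]
        vms[gains.index(max(gains))] += 1
    return vms, vm_cost * sum(vms)

# example: three pipelined stages, 300 ms end-to-end response-time budget
print(allocate_vms([(40.0, 10.0), (25.0, 8.0), (60.0, 15.0)], deadline=0.3))
```
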
Citations: 2
Single-input-multiple-output transcoding for video streaming
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813357
Chengzhi Wang, Bo Li, Jie Wang, Hao Zhang, Hao Chen, Yiling Xu, Zhan Ma
In this work, a single input multiple output (SIMO) transcoding architecture is proposed. SIMO benefits mobile edge computing (for example, HTTP Live Streaming, which requires multiple copies of the video stream at different quality levels) without resorting to legacy transcoding, in which the video stream is fully decoded and re-encoded multiple times without exploiting the compressed-domain information. By leveraging the information encoded in the existing video stream, we can reduce the number of search candidates when transcoding the high-quality bitstream to other versions with reduced quality levels. As a first step, we demonstrate the SIMO idea in a bit-rate-shaping-only (i.e., bit rate transcoding) scenario. Under the common test conditions, it shows more than a 2x complexity reduction without quality loss.
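
The candidate-reduction idea can be illustrated with a toy motion-vector refinement (an assumption-laden sketch, not the authors' encoder): instead of a full-range search when producing a lower-rate output, only a small window around the motion vector decoded from the input bitstream is searched. The block size, window radius, and SAD metric are illustrative choices.

```python
import numpy as np

# Illustrative sketch: reuse the decoded motion vector as a search anchor and
# only refine within a small window (block size, radius, SAD are assumptions).

def sad(block, cand):
    return int(np.abs(block.astype(int) - cand.astype(int)).sum())

def refine_mv(cur, ref, x, y, bsize, decoded_mv, radius=2):
    """Search only +/- radius around the MV decoded from the input bitstream."""
    best_cost, best_mv = None, decoded_mv
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            mvx, mvy = decoded_mv[0] + dx, decoded_mv[1] + dy
            rx, ry = x + mvx, y + mvy
            if 0 <= rx <= ref.shape[1] - bsize and 0 <= ry <= ref.shape[0] - bsize:
                cost = sad(cur[y:y + bsize, x:x + bsize], ref[ry:ry + bsize, rx:rx + bsize])
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (mvx, mvy)
    return best_mv, best_cost

rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (-1, -2), axis=(0, 1))        # content shifted so the true MV is (2, 1)
print(refine_mv(cur, ref, x=16, y=16, bsize=8, decoded_mv=(2, 1)))
```
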
Citations: 5
Coding unit splitting early termination for fast HEVC intra coding based on global and directional gradients
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813356
Mohammadreza Jamali, S. Coulombe
High efficiency video coding (HEVC) doubles the compression ratio as compared to H.264/AVC, for the same quality. To achieve this improved coding performance, HEVC presents a new content-adaptive approach to split a frame into coding units (CUs), along with an increased number of prediction modes, which results in significant computational complexity. To lower this complexity with intra coding, in this paper, we develop a new method based on global and directional gradients to terminate the CU splitting procedure early and prevent processing of unnecessary depths. The global and directional gradients determine if the unit is predicted with high accuracy at the current level, and where that's the case, the CU is deemed to be non-split. Experimental results show that the proposed method reduces the encoding time by 52% on average, with a small quality loss of 0.07 dB (BD-PSNR) for all-intra scenarios, as compared to the HEVC reference implementation, HM 15.0.
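
A minimal sketch of the gradient-based early-termination idea follows (the thresholds and the exact gradient statistics are assumptions, not the values derived in the paper): a CU with low global gradient energy, or with one strongly dominant direction, is treated as well predicted at the current depth and its splitting is skipped.

```python
import numpy as np

# Illustrative thresholds (not the paper's): skip CU splitting when the block
# is smooth or dominated by a single gradient direction.

def gradient_stats(cu):
    gy, gx = np.gradient(cu.astype(float))
    return np.mean(np.hypot(gx, gy)), np.mean(np.abs(gx)), np.mean(np.abs(gy))

def should_split(cu, t_global=8.0, t_dir_ratio=3.0):
    g, gh, gv = gradient_stats(cu)
    if g < t_global:
        return False                 # smooth CU: terminate splitting early
    if max(gh, gv) / (min(gh, gv) + 1e-6) > t_dir_ratio:
        return False                 # strongly directional: current depth suffices
    return True                      # complex texture: evaluate child CUs

flat_cu = np.full((32, 32), 128, dtype=np.uint8)
textured_cu = np.random.default_rng(1).integers(0, 255, (32, 32)).astype(np.uint8)
print(should_split(flat_cu), should_split(textured_cu))   # expected: False True
```
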
Citations: 9
Human gesture recognition via bag of angles for 3D virtual city planning in CAVE environment
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813380
Nour El-Din El-Madany, Yifeng He, L. Guan
The Cave Automatic Virtual Environment (CAVE) provides an immersive virtual environment for 3D city planning. However, the user in the CAVE has to wear markers to interact with the 3D models. To enable natural interaction, a depth camera such as the Kinect has to be considered. In this paper, we propose a new skeleton joint representation called Bag of Angles (BoA) for human gesture recognition. We evaluated the proposed BoA representation on two datasets, UTD-MHAD and UTD-MHAD-KinectV2. The evaluation results demonstrate that the proposed BoA representation achieves higher recognition accuracy than other existing representation methods.
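
The following sketch illustrates an angle-histogram descriptor in the spirit of BoA (the joint triples, bin count, and normalization are assumptions, not the paper's exact design): per-frame angles at selected joints are pooled into a histogram over the whole sequence, giving a fixed-length feature that a standard classifier could consume.

```python
import numpy as np

# Illustrative descriptor (joint triples, bin count and normalization are
# assumptions): pool per-frame joint angles into one histogram per sequence.

def joint_angle(a, b, c):
    """Angle at joint b formed by the segments b->a and b->c, in radians."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def bag_of_angles(skeleton_seq, triples, n_bins=12):
    """skeleton_seq: (T, J, 3) joint positions; triples: (a, b, c) joint indices."""
    angles = [joint_angle(frame[a], frame[b], frame[c])
              for frame in skeleton_seq for a, b, c in triples]
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    return hist / max(hist.sum(), 1)   # normalized, fixed-length gesture descriptor

# toy sequence: 50 frames, 20 joints; e.g. the angle at joint 2 between joints 1 and 3
seq = np.random.default_rng(0).normal(size=(50, 20, 3))
print(bag_of_angles(seq, triples=[(1, 2, 3), (4, 5, 6), (7, 8, 9)]))
```
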
Citations: 8
Low-power distributed sparse recovery testbed on wireless sensor networks
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813404
R. R. D. Lucia, S. Fosson, E. Magli
Recently, distributed algorithms have been proposed for the recovery of sparse signals in networked systems, e.g. wireless sensor networks. Such algorithms allow large networks to operate autonomously without the need of a fusion center, and are very appealing for smart sensing problems employing low-power devices. They exploit local communications, where each node of the network updates its estimates of the sensed signal also based on the correlated information received from neighboring nodes. In the literature, theoretical results and numerical simulations have been presented to prove convergence of such methods to accurate estimates. Their implementation, however, raises some concerns in terms of power consumption due to iterative inter-node communications, data storage, computation capabilities, global synchronization, and faulty communications. On the other hand, despite these potential issues, practical implementations on real sensor networks have not been demonstrated yet. In this paper we fill this gap and describe a successful implementation of a class of randomized, distributed algorithms on a real low-power wireless sensor network test bed with very scarce computational capabilities. We consider a distributed compressed sensing problem and we show how to cope with the issues mentioned above. Our tests on synthetic and real signals show that distributed compressed sensing can successfully operate in a real-world environment.
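
As a rough illustration of fusion-center-free recovery, here is a toy consensus scheme (not the specific randomized algorithms deployed on the testbed): each node runs a local iterative soft-thresholding step on its own measurements and then averages its estimate with its neighbours', so only local exchanges are needed. The network size, step size, and threshold are made-up values.

```python
import numpy as np

# Toy consensus-based recovery (not the testbed's exact randomized algorithms):
# local soft-thresholded gradient steps plus averaging with ring neighbours.

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def distributed_ist(A_list, y_list, neighbours, n_iter=500, step=0.1, thresh=0.01):
    n = A_list[0].shape[1]
    est = [np.zeros(n) for _ in A_list]
    for _ in range(n_iter):
        # local gradient + shrinkage step on each node's own measurements
        est = [soft(x - step * A.T @ (A @ x - y), thresh)
               for x, A, y in zip(est, A_list, y_list)]
        # consensus step: average with neighbours (one local exchange per iteration)
        est = [np.mean([est[j] for j in [i] + neighbours[i]], axis=0)
               for i in range(len(est))]
    return est

rng = np.random.default_rng(3)
x_true = np.zeros(60)
x_true[[5, 20, 47]] = [1.0, -0.8, 0.5]
A_list = [rng.normal(size=(15, 60)) / np.sqrt(15) for _ in range(4)]   # 4 sensing nodes
y_list = [A @ x_true for A in A_list]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}                    # ring topology
print(np.round(distributed_ist(A_list, y_list, ring)[0][[5, 20, 47]], 2))  # node 0, true support
```
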
Citations: 1
Temporally consistent high frame-rate upsampling with motion sparsification
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813394
Dominic Rüfenacht, D. Taubman
This paper continues our work on occlusion-aware temporal frame interpolation (TFI) that employs piecewise-smooth motion with sharp motion boundaries. In this work, we propose a triangular mesh sparsification algorithm, which allows trading off computational complexity against reconstruction quality. Furthermore, we propose a method to create a background motion layer in regions that become disoccluded between the two reference frames, which is used to obtain temporally consistent interpolations among the frames interpolated between the two reference frames. Experimental results on a large data set show that the proposed mesh sparsification is able to reduce the processing time by 75%, with a minor drop in PSNR of 0.02 dB. The proposed TFI scheme outperforms various state-of-the-art TFI methods in terms of quality of the interpolated frames, while having the lowest processing times. Further experiments on challenging synthetic sequences highlight the temporal consistency in traditionally difficult regions of disocclusion.
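
The sparsification idea can be sketched on a regular grid (a simplification; the paper works with a triangular mesh and estimated motion): coarse anchor nodes are always kept, and a finer node is retained only where interpolating the coarse motion misses the true motion by more than a tolerance, trading mesh size against motion accuracy. The grid spacing and tolerance below are arbitrary.

```python
import numpy as np

# Grid-based stand-in for mesh sparsification (spacing and tolerance are
# arbitrary): keep a fine node only where coarse interpolation fails.

def sparsify_motion(mv, coarse=8, tol=0.25):
    """mv: (H, W, 2) dense motion field; returns the set of retained nodes."""
    h, w, _ = mv.shape
    kept = {(y, x) for y in range(0, h, coarse) for x in range(0, w, coarse)}
    for y in range(h):
        for x in range(w):
            y0, x0 = (y // coarse) * coarse, (x // coarse) * coarse
            y1, x1 = min(y0 + coarse, h - 1), min(x0 + coarse, w - 1)
            fy = 0.0 if y1 == y0 else (y - y0) / (y1 - y0)
            fx = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
            interp = ((1 - fy) * (1 - fx) * mv[y0, x0] + (1 - fy) * fx * mv[y0, x1]
                      + fy * (1 - fx) * mv[y1, x0] + fy * fx * mv[y1, x1])
            if np.linalg.norm(mv[y, x] - interp) > tol:
                kept.add((y, x))     # coarse interpolation misses this node's motion
    return kept

# piecewise-constant motion: left half moves by (1, 0), right half by (-2, 1)
field = np.zeros((33, 33, 2))
field[:, :16] = (1.0, 0.0)
field[:, 16:] = (-2.0, 1.0)
print(f"kept {len(sparsify_motion(field))} of {33 * 33} candidate nodes")
```
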
Citations: 8
Adaptive enhancement filtering for motion compensation
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813402
Xiaoyu Xiu, Yuwen He, Yan Ye
This paper proposes an enhanced motion compensated prediction algorithm for hybrid video coding. The algorithm is built upon the concept of applying adaptive enhancement filtering at the motion compensation stage. For luma, a high-pass filter is applied to the motion compensated prediction signal to recover distorted high-frequency information. For chroma, cross-plane filters are applied to enhance the motion compensated signals by restoring the blurred edges and textures of the chroma planes using the high-frequency information of the luma plane. To verify the effectiveness, the proposed algorithm is implemented on the HM Key Technology Area (HM-KTA) 1.0 platform. Experimental results show that compared to the anchor, the proposed algorithm achieves average Bjøntegaard delta (BD) rate savings of 0.4%, 8.8% and 7.4% for Y, Cb and Cr components, respectively.
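
A minimal sketch of the luma enhancement step is given below (the 3x3 Laplacian and the fixed strength are stand-ins; the paper's filters are adaptive): a high-pass response of the motion-compensated prediction is scaled and added back to restore some high-frequency detail.

```python
import numpy as np

# Illustrative fixed high-pass filter (the paper's filters are adaptive):
# boost the Laplacian of the motion-compensated prediction.

def enhance_prediction(pred, strength=0.5):
    p = np.pad(pred.astype(float), 1, mode="edge")
    # 3x3 Laplacian: centre minus the average of its 4-neighbourhood
    highpass = (p[1:-1, 1:-1]
                - 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]))
    return np.clip(pred + strength * highpass, 0, 255)

# blur a block to mimic the low-pass effect of sub-pel interpolation, then enhance it
rng = np.random.default_rng(7)
orig = rng.integers(0, 255, (16, 16)).astype(float)
blurred = (orig + np.roll(orig, 1, 0) + np.roll(orig, 1, 1) + np.roll(orig, (1, 1), (0, 1))) / 4
enhanced = enhance_prediction(blurred)
print(f"std of prediction: {blurred.std():.1f} -> {enhanced.std():.1f} after enhancement")
```
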
Citations: 0
Slice-based parallelization in HEVC encoding: Realizing the potential through efficient load balancing
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813354
M. Koziri, Panos K. Papadopoulos, Nikos Tziritas, Antonios N. Dadaliaris, Thanasis Loukopoulos, S. Khan
The new video coding standard HEVC (High Efficiency Video Coding) offers the desired compression performance in the era of HDTV and UHDTV, as it achieves nearly 50% bit rate savings compared to H.264/AVC. To cope with the involved computational overhead, HEVC offers three parallelization approaches, namely wavefront, tile-based, and slice-based parallelization. In this paper we study slice-based parallelization of the HEVC encoder using OpenMP. In particular, we delve into the problem of proper slice sizing to reduce load imbalances among threads. Capitalizing on existing ideas for H.264/AVC, we develop a fast dynamic approach to decide on load distribution and compare it against an alternative from the HEVC literature. Through experiments with commonly used video sequences, we highlight the merits and drawbacks of the tested heuristics. We then improve upon them for the Low-Delay case by exploiting the GOP structure. The resulting algorithm is shown to clearly outperform its counterparts, achieving less than 10% load imbalance in many cases.
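
The load-balancing idea can be sketched as cost-based slice sizing (the per-CTU cost model, e.g. timings carried over from the previous frame, is an assumption in the spirit of the dynamic approach rather than its exact formulation): CTUs are grouped into slices of roughly equal predicted cost instead of equal CTU count.

```python
# Cost-based slice sizing sketch (per-CTU costs, e.g. timings from the previous
# frame, are an assumed input): aim for equal predicted cost per slice.

def balanced_slices(ctu_costs, n_slices):
    target = sum(ctu_costs) / n_slices
    bounds, acc, start = [], 0.0, 0
    for i, cost in enumerate(ctu_costs):
        acc += cost
        remaining = n_slices - len(bounds) - 1
        # close the slice once its predicted cost reaches the equal share,
        # keeping at least one CTU for every slice still to be formed
        if remaining > 0 and acc >= target and len(ctu_costs) - i - 1 >= remaining:
            bounds.append((start, i))
            start, acc = i + 1, 0.0
    bounds.append((start, len(ctu_costs) - 1))
    return bounds

# previous-frame costs: a detailed region in the middle is more expensive to encode
costs = [1.0] * 20 + [4.0] * 10 + [1.0] * 20
for s, (a, b) in enumerate(balanced_slices(costs, n_slices=4)):
    print(f"slice {s}: CTUs {a}-{b}, predicted cost {sum(costs[a:b + 1]):.0f}")
```
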
Citations: 19
Novel UEP product code scheme with protograph-based linear permutation and iterative decoding for scalable image transmission
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813407
Huihui Wu, S. Dumitrescu
This paper introduces a linear permutation module before the inner encoder of the iteratively decoded product coding structure, for the transmission of scalable bit streams over error-prone channels. This can improve the error correction ability of the inner code when some source bits are known from the preceding outer code decoding stages. The product code consists of a protograph low-density parity-check code (inner code) and Reed-Solomon (RS) codes of various strengths (outer code). Further, an algorithm relying on protograph-based extrinsic information transfer analysis is devised to design good base matrices from which the linear permutations are constructed. In addition, an analytical formula for the expected fidelity of the reconstructed sequence is derived and utilized in the optimization of the RS codes redundancy assignment. The experimental results reveal that the proposed approach consistently outperforms the scheme without the linear permutation module, reaching peak improvements of 1.98 dB and 1.30 dB over binary symmetric channels (BSC) and additive white Gaussian noise (AWGN) channels, respectively.
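
A greatly simplified illustration of the unequal-error-protection assignment follows (a packet-erasure view with made-up gains and budget, not the paper's protograph-LDPC/BSC analysis and fidelity formula): each scalable layer gets an RS(n, k) code, a layer contributes only if all earlier layers are also recovered, and the k values are searched to maximize expected fidelity under a redundancy budget.

```python
from itertools import product
from math import comb

# Simplified packet-erasure view with made-up layer gains and budget (not the
# paper's LDPC/BSC analysis): search RS dimensions that maximize expected fidelity.

def recovery_prob(n, k, p_loss):
    """Probability that an RS(n, k) erasure code recovers (at most n - k packets lost)."""
    return sum(comb(n, i) * p_loss**i * (1 - p_loss)**(n - i) for i in range(n - k + 1))

def best_assignment(n, gains, p_loss, redundancy_budget):
    """gains[l]: fidelity added when layers 0..l of the scalable stream all decode."""
    best_ks, best_f = None, -1.0
    for ks in product(range(1, n + 1), repeat=len(gains)):
        if sum(n - k for k in ks) > redundancy_budget:
            continue
        cum, fidelity = 1.0, 0.0
        for k, gain in zip(ks, gains):
            cum *= recovery_prob(n, k, p_loss)   # layer l helps only if 0..l all decode
            fidelity += gain * cum
        if fidelity > best_f:
            best_f, best_ks = fidelity, ks
    return best_ks, best_f

# three layers of a scalable image stream; the base layer matters most
print(best_assignment(n=8, gains=[20.0, 6.0, 2.0], p_loss=0.1, redundancy_budget=6))
```
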
Citations: 1
Multi-image super-resolution using a locally adaptive denoising-based refinement
Pub Date : 2016-09-01 DOI: 10.1109/MMSP.2016.7813343
M. Bätz, Ján Koloda, Andrea Eichenseer, André Kaup
Spatial resolution enhancement is of particular interest in many applications such as entertainment, surveillance, or automotive systems. Besides using a more expensive, higher resolution sensor, it is also possible to apply super-resolution techniques to the low resolution content. Super-resolution methods can basically be classified into single-image and multi-image super-resolution. In this paper, we propose the integration of a novel locally adaptive denoising-based refinement step as an intermediate processing step in a multi-image super-resolution framework. The idea is to be capable of removing reconstruction artifacts while preserving the details in areas of interest such as text. Simulation results show an average gain in luminance PSNR of up to 0.2 dB and 0.3 dB for upscaling factors of 2 and 4, respectively. The objective results are substantiated by the visual impression.
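
The overall pipeline shape can be sketched as follows (known sub-pixel shifts and a very simple gradient-gated smoother stand in for the paper's registration and denoising refinement): registered low-resolution frames are fused onto the high-resolution grid, and each pixel is then blended with its local mean using a weight that shrinks where the local gradient is large, so edges and text are left mostly untouched.

```python
import numpy as np

# Toy pipeline (known shifts and a simple gradient-gated smoother stand in for
# the paper's registration and denoising refinement).

def fuse(lr_frames, shifts, scale=2):
    """Shift-and-add fusion of registered LR frames onto the HR grid."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        acc[dy::scale, dx::scale] += frame
        cnt[dy::scale, dx::scale] += 1
    cnt[cnt == 0] = 1
    return acc / cnt

def adaptive_refine(img, strength=60.0):
    """Blend each pixel with its 3x3 mean, less so where the local gradient is strong."""
    p = np.pad(img, 1, mode="edge")
    local_mean = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    gy, gx = np.gradient(img)
    weight = np.exp(-np.hypot(gx, gy) / strength)   # close to 1 in flat areas, small at edges
    return weight * local_mean + (1 - weight) * img

rng = np.random.default_rng(2)
hr = np.kron(rng.integers(0, 2, (16, 16)) * 255.0, np.ones((4, 4)))   # blocky, text-like truth
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lr_frames = [hr[dy::2, dx::2] + rng.normal(0, 5, (32, 32)) for dy, dx in shifts]
sr = adaptive_refine(fuse(lr_frames, shifts))
print(f"SR output {sr.shape}, mean abs. error vs. ground truth: {np.abs(sr - hr).mean():.1f}")
```
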
Citations: 9