
Latest Publications: 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)

Profiling Actions for Sport Video Summarization: An attention signal analysis
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287062
Melissa Sanabria, F. Precioso, Thomas Menguy
Currently, in broadcast companies, many human operators select which actions should belong to the summary based on multiple rules they have built from their own experience with different sources of information. These rules define the different profiles of actions of interest that help the operator generate better customized summaries. Most of these profiles do not directly rely on broadcast video content, but rather exploit metadata describing the course of the match. In this paper, we show how the signals produced by the attention layer of a recurrent neural network can be seen as a learned representation of these action profiles and provide a new tool to support operators’ work. The results on soccer matches show the capacity of our approach to transfer knowledge between datasets from different broadcasting companies and different leagues, and the ability of the attention layer to learn meaningful action profiles.
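To make the idea concrete, here is a minimal sketch of how an attention layer over recurrent features yields a per-timestep signal that can be read as an action profile. The architecture, sizes, and names are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class AttentionProfiler(nn.Module):
    """Hypothetical sketch: a recurrent encoder whose additive-attention
    weights over time serve as a learned per-timestep 'action profile'."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # attention scorer per timestep
        self.head = nn.Linear(hidden, 1)    # action-of-interest logit

    def forward(self, x):                   # x: (batch, time, feat_dim)
        h, _ = self.rnn(x)                  # (batch, time, hidden)
        a = torch.softmax(self.score(h).squeeze(-1), dim=1)  # attention signal
        ctx = torch.bmm(a.unsqueeze(1), h).squeeze(1)        # weighted context
        return self.head(ctx), a

model = AttentionProfiler()
logit, profile = model(torch.randn(2, 50, 128))  # profile: (2, 50) signal over time
```

Inspecting `profile` across a match is what would expose the learned action profiles.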
Citations: 1
MMSP 2020 TOC
Pub Date : 2020-09-21 DOI: 10.1109/mmsp48831.2020.9287087
Citations: 0
Optimizing Rate-Distortion Performance of Motion Compensated Wavelet Lifting with Denoised Prediction and Update
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287070
Daniela Lanz, A. Kaup
Efficient lossless coding of medical volume data with a temporal axis can be achieved by motion compensated wavelet lifting. As a side benefit, a scalable bit stream is generated, which allows the data to be displayed at different resolution layers, a feature highly demanded in telemedicine applications. Additionally, the similarity of the temporal base layer to the input sequence is preserved by the use of motion compensated temporal filtering. However, for medical sequences the overall rate is increased due to the specific noise characteristics of the data. The use of denoising filters inside the lifting structure can improve the compression efficiency significantly without endangering the property of perfect reconstruction. However, the design of an optimal filter is a crucial task. In this paper, we present a new method for selecting the optimal filter strength for a given denoising filter in a rate-distortion sense. This allows the required rate to be minimized based on a single input parameter with which the encoder controls the requested distortion of the temporal base layer.
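The key constraint is that a denoising filter placed inside the lifting steps must not break perfect reconstruction. Below is a minimal one-dimensional sketch using a median filter as a stand-in denoiser; the paper's motion compensated, rate-distortion-optimized setting is not reproduced here, and `strength` merely mimics the single filter-strength parameter the paper selects.

```python
import numpy as np
from scipy.ndimage import median_filter  # stand-in denoising filter

def lift_forward(even, odd, strength=3):
    """Haar-like lifting with denoised prediction/update. Perfect
    reconstruction holds because the inverse recomputes exactly the
    same filtered signals and subtracts them back."""
    H = odd - median_filter(even, size=strength)                               # predict
    L = even + np.round(median_filter(H, size=strength) / 2).astype(H.dtype)   # update
    return L, H

def lift_inverse(L, H, strength=3):
    even = L - np.round(median_filter(H, size=strength) / 2).astype(H.dtype)
    odd = H + median_filter(even, size=strength)
    return even, odd

x = np.random.randint(0, 256, 64)
L, H = lift_forward(x[0::2], x[1::2])
even, odd = lift_inverse(L, H)
assert np.array_equal(even, x[0::2]) and np.array_equal(odd, x[1::2])  # lossless
```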
Citations: 0
Merging of MOS of Large Image Databases for No-reference Image Visual Quality Assessment
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287141
Aki Kaipio, Mykola Ponomarenko, K. Egiazarian
Large specialized image databases are used for training no-reference image visual quality metrics. For the images of these databases, mean opinion scores (MOS) are obtained experimentally by collecting the judgments of many observers. The MOS of a given image reflects an averaged human perception of the visual quality of the image. Each database has its own unknown scale of MOS values, depending on the unique content of the database. For training no-reference metrics based on convolutional networks, usually only one selected database is used, because all MOS values fed into the training loss function should be on the same scale. In this paper, a simple and effective method is proposed for merging several large databases into one database by transforming their MOS onto one scale. The accuracy of the proposed method is analyzed. The merged MOS is used for practical training of a no-reference metric. A comparative analysis shows the better effectiveness of this training.
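The paper's exact alignment procedure is not reproduced here; as a minimal sketch, assume each database can be anchored to a common scale (for instance, via the scores of some reference metric on its images) and fit a per-database linear map by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
q_a, q_b = rng.uniform(0, 1, 200), rng.uniform(0, 1, 300)   # latent quality
mos_a = 9 * q_a + rng.normal(0, 0.2, 200)      # database A: MOS on a [0, 9] scale
mos_b = 100 * q_b + rng.normal(0, 2.0, 300)    # database B: MOS on a [0, 100] scale
anchor_a, anchor_b = q_a, q_b                  # anchor scores on one common scale

def fit_scale_map(mos, anchor):
    """Least-squares linear map a*mos + b onto the common scale."""
    A = np.stack([mos, np.ones_like(mos)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, anchor, rcond=None)
    return a, b

a1, b1 = fit_scale_map(mos_a, anchor_a)
a2, b2 = fit_scale_map(mos_b, anchor_b)
merged_mos = np.concatenate([a1 * mos_a + b1, a2 * mos_b + b2])  # single scale
```

Once on a single scale, the merged MOS can feed one loss function during training.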
Citations: 4
Audio-Fingerprinting via Dictionary Learning
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287073
Christina Saravanos, D. Ampeliotis, K. Berberidis
In recent years, several successful schemes have been proposed to solve the song identification problem. These techniques aim to construct a signal’s audio-fingerprint either by employing conventional signal processing techniques or by computing its sparse representation in the time-frequency domain. This paper proposes a new audio-fingerprinting scheme that is able to construct a unique and concise representation of an audio signal by applying a dictionary, learnt here via the well-known K-SVD algorithm applied to a song database. The promising results that emerged from the experiments suggest that the proposed approach not only performed rather well in identifying the signal content of several audio clips, even in cases where this content had been distorted by noise, but also surpassed the recognition rate of a Shazam-based paradigm.
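As an illustration of the dictionary-learning pipeline, the sketch below learns a dictionary over spectrogram-like frames and keeps the indices of the active atoms as a concise fingerprint. K-SVD itself is not in scikit-learn; `MiniBatchDictionaryLearning` is a related alternating-minimization scheme used here as a stand-in, and all sizes are illustrative:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in features: magnitude-spectrogram frames, shape (n_frames, n_bins).
frames = np.abs(np.random.randn(500, 257))

dico = MiniBatchDictionaryLearning(
    n_components=128,                # number of dictionary atoms
    transform_algorithm="omp",       # sparse coding by orthogonal matching pursuit
    transform_n_nonzero_coefs=5,     # sparsity level per frame
    random_state=0,
).fit(frames)

codes = dico.transform(frames)       # sparse codes, shape (500, 128)
# A concise fingerprint: the set of active atom indices per frame.
fingerprint = [tuple(np.flatnonzero(c)) for c in codes]
```

Matching a query clip would then reduce to comparing such index sets against the database.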
Citations: 1
A Low Complexity Long Short-Term Memory Based Voice Activity Detection
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287142
Ruiting Yang, Jie Liu, Xiang Deng, Zhuochao Zheng
Voice Activity Detection (VAD) plays an important role in audio processing, but it is also a common challenge when a voice signal is corrupted by strong and transient noise. In this paper, an accurate and causal VAD module using a long short-term memory (LSTM) deep neural network is proposed. A set of features including Gammatone cepstral coefficients (GTCC) and selected spectral features is used. The low-complexity structure allows it to be easily implemented in speech processing algorithms and applications. With careful pre-processing and labeling of the collected training data into speech and non-speech classes, and training on the LSTM net, experiments show that the proposed VAD is able to distinguish speech from different types of noisy background effectively. Its robustness against changes, including varying frame length, moving speech sources, and speaking in different languages, is further investigated.
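A minimal causal LSTM classifier of this kind might look as follows; the feature dimension (e.g., GTCC plus spectral features) and layer sizes are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LSTMVAD(nn.Module):
    """Sketch of a causal, frame-wise VAD: a unidirectional LSTM sees
    only past context, so decisions can be made in real time."""
    def __init__(self, n_feats=20, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, feats):                 # feats: (batch, frames, n_feats)
        h, _ = self.lstm(feats)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # speech prob per frame

vad = LSTMVAD()
probs = vad(torch.randn(1, 100, 20))          # (1, 100) frame-wise probabilities
speech_frames = probs > 0.5                   # binary speech/non-speech decision
```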
Citations: 0
On Verification of Blur and Sharpness Metrics for No-reference Image Visual Quality Assessment
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287110
Sheyda Ghanbaralizadeh Bahnemiri, Mykola Ponomarenko, K. Egiazarian
Natural images may contain regions with different levels of blur affecting image visual quality. No-reference image visual quality metrics should be able to effectively evaluate both blur and sharpness levels of a given image. In this paper, we propose a large image database, BlurSet, to verify this ability. BlurSet contains 5000 grayscale images of size 128×128 pixels with different levels of Gaussian blur and unsharp masking. For each image, a scalar value indicating the level of blur and the level of sharpness is provided. Several image quality assessment criteria are presented to evaluate how well a given metric can estimate the level of blur/sharpness on BlurSet. An extensive comparative analysis of different no-reference metrics is carried out. Reachable levels of the quality criteria are evaluated using the proposed blur/sharpness convolutional neural network (BSCNN).
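A BlurSet-style pair of degradations can be generated with standard tools; the sketch below maps a signed scalar level to either Gaussian blur or unsharp masking (the parameterization is an illustrative assumption, not the database's exact recipe):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(img, level):
    """level > 0: Gaussian blur with sigma=level;
    level < 0: unsharp mask img + k*(img - blurred) with k=-level."""
    if level >= 0:
        return gaussian_filter(img, sigma=level)
    blurred = gaussian_filter(img, sigma=1.0)
    return np.clip(img + (-level) * (img - blurred), 0.0, 1.0)

img = np.random.rand(128, 128)                # stand-in 128x128 grayscale image
variants = {lvl: degrade(img, lvl) for lvl in (-2.0, -1.0, 0.0, 1.0, 2.0)}
```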
Citations: 2
Eye Movement State Trajectory Estimator based on Ancestor Sampling
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287155
S. Malladi, J. Mukhopadhyay, M. Larabi, S. Chaudhury
Human gaze dynamics mainly concern the sequence in which three eye movements occur: fixations, saccades, and microsaccades. In this paper, we correlate them, as three different states, with the velocities of eye movements. We build a state trajectory estimator based on an ancestor sampling (ST EAS) model, which captures the features of the human temporal gaze pattern to identify the kind of visual stimuli. We used a gaze dataset of 72 viewers watching 60 video clips, equally split into four visual categories. Velocity vectors uniformly sampled from the training set are used to find the most suitable parameters of the proposed statistical model. Then, the optimized model is used for both gaze data classification and video retrieval on the test set. We observed a classification accuracy of 93.265% and a mean reciprocal rank of 0.888 for video retrieval on the test set. Hence, this model can be used for viewer-independent video indexing, providing viewers an easier way to navigate through the contents.
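The state-to-velocity correspondence can be pictured with a much simpler stand-in than the paper's ancestor-sampling estimator: a velocity-threshold labeling of gaze samples into the three states, with thresholds that are illustrative assumptions only:

```python
import numpy as np

FIX_MAX, MICRO_MAX = 5.0, 30.0            # deg/s; assumed, not the paper's values

def label_states(gaze, dt):
    """gaze: (n, 2) positions in degrees; dt: sample period in seconds.
    Returns per-sample velocities and a naive three-state labeling."""
    vel = np.linalg.norm(np.diff(gaze, axis=0), axis=1) / dt
    states = np.full(vel.shape, "saccade", dtype=object)
    states[vel < MICRO_MAX] = "microsaccade"
    states[vel < FIX_MAX] = "fixation"
    return vel, states

gaze = np.cumsum(np.random.normal(0, 0.05, (500, 2)), axis=0)  # synthetic trace
vel, states = label_states(gaze, dt=1 / 250)  # assume a 250 Hz eye tracker
```

The paper's model replaces such hard thresholds with a sampled state trajectory, which is what enables classification and retrieval from the state sequence.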
Citations: 1
A Coarse Representation of Frames Oriented Video Coding By Leveraging Cuboidal Partitioning of Image Data
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287138
Ashek Ahmmed, M. Paul, Manzur Murshed, D. Taubman
Video coding algorithms attempt to minimize the significant commonality that exists within a video sequence. Each new video coding standard contains tools that can perform this task more efficiently than its predecessors. In this work, we form a coarse representation of the current frame by minimizing commonality within that frame while preserving its important structural properties. The building blocks of this coarse representation are rectangular regions called cuboids, which are computationally simple and have a compact description. We then propose to employ the coarse frame as an additional source for predictive coding of the current frame. Experimental results show an improvement in bit rate savings over a reference codec for HEVC, with a minor increase in codec computational complexity.
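A greedy sketch of cuboidal partitioning conveys the idea: repeatedly split the rectangle whose best axis-aligned cut most reduces the squared error, then paint each rectangle with its mean to obtain the coarse frame. This illustrates the principle only, not the paper's exact optimization:

```python
import numpy as np

def cuboid_partition(img, n_cuboids=64):
    def sse(b):
        return ((b - b.mean()) ** 2).sum()

    blocks = [(0, 0, *img.shape)]               # (top, left, height, width)
    while len(blocks) < n_cuboids:
        best = None                              # (gain, block index, axis, cut)
        for i, (t, l, h, w) in enumerate(blocks):
            b = img[t:t+h, l:l+w]
            for axis, size in ((0, h), (1, w)):
                for cut in range(1, size):
                    b1, b2 = np.split(b, [cut], axis=axis)
                    gain = sse(b) - sse(b1) - sse(b2)
                    if best is None or gain > best[0]:
                        best = (gain, i, axis, cut)
        _, i, axis, cut = best
        t, l, h, w = blocks.pop(i)
        if axis == 0:
            blocks += [(t, l, cut, w), (t + cut, l, h - cut, w)]
        else:
            blocks += [(t, l, h, cut), (t, l + cut, h, w - cut)]
    coarse = np.empty_like(img, dtype=float)     # each cuboid -> its mean value
    for t, l, h, w in blocks:
        coarse[t:t+h, l:l+w] = img[t:t+h, l:l+w].mean()
    return coarse

coarse = cuboid_partition(np.random.rand(32, 32), n_cuboids=16)
```

The resulting coarse frame is cheap to describe (a few rectangles and their means), which is what makes it usable as an extra prediction source.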
Citations: 6
MMSP 2020 Breaker Page
Pub Date : 2020-09-21 DOI: 10.1109/mmsp48831.2020.9287118
Citations: 0