
Latest Publications: 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)

MMSP 2020 TOC
Pub Date : 2020-09-21 DOI: 10.1109/mmsp48831.2020.9287087
{"title":"MMSP 2020 TOC","authors":"","doi":"10.1109/mmsp48831.2020.9287087","DOIUrl":"https://doi.org/10.1109/mmsp48831.2020.9287087","url":null,"abstract":"","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131694735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Motion JPEG Decoding via Iterative Thresholding and Motion-Compensated Deflickering
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287147
E. Belyaev, Linlin Bie, J. Korhonen
This paper studies the problem of decoding video sequences compressed by Motion JPEG (M-JPEG) at the best possible perceived video quality. We consider decoding of M-JPEG video as signal recovery from incomplete measurements, as known in compressive sensing. We take all quantized nonzero Discrete Cosine Transform (DCT) coefficients as measurements and the remaining zero coefficients as data that should be recovered. The output video is reconstructed via an iterative thresholding algorithm, where Video Block Matching and 4-D filtering (VBM4D) is used as the thresholding operator. To reduce non-linearities in the measurements caused by the quantization in JPEG, we propose to apply spatio-temporal pre-filtering before measurement calculation and recovery. Since temporal inconsistencies of the residual coding artifacts lead to strong flickering in the recovered video, we also propose to apply a motion-compensated deflickering filter as a post-filter. Experimental results show that the proposed approach provides a 0.44–0.51 dB average improvement in Peak Signal to Noise Ratio (PSNR), as well as a lower flickering level, compared to the state-of-the-art method based on Coefficient Graph Laplacians (COGL). We have also conducted a subjective comparison study, indicating that the proposed approach outperforms state-of-the-art methods in terms of subjective video quality.
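To make the recovery idea concrete, here is a minimal single-frame sketch of such an iterative-thresholding loop: a denoiser alternates with a projection that keeps every DCT coefficient inside the quantization cell implied by the decoded JPEG data. A plain Gaussian filter stands in for VBM4D (which operates on spatio-temporal volumes and is not reproduced here), the deflickering post-filter is omitted, and the DCT scaling is only approximately JPEG's; function names and parameters are illustrative, not the paper's.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

BS = 8  # JPEG block size

def project(frame, qcoef, qtable):
    """Project onto the measurement set: each 8x8 block's DCT coefficients
    must stay inside the quantization interval of their decoded values."""
    out = np.empty_like(frame)
    h, w = frame.shape
    for y in range(0, h, BS):
        for x in range(0, w, BS):
            c = dctn(frame[y:y+BS, x:x+BS], norm="ortho")
            ref = qcoef[y:y+BS, x:x+BS] * qtable          # dequantized values
            lo, hi = ref - qtable / 2, ref + qtable / 2   # quantization cell
            out[y:y+BS, x:x+BS] = idctn(np.clip(c, lo, hi), norm="ortho")
    return out

def decode_iterative(qcoef, qtable, n_iter=30, sigma=1.0):
    """Alternate a denoiser (stand-in for VBM4D) with the projection that
    enforces consistency with the quantized measurements."""
    frame = project(np.zeros_like(qcoef, dtype=float), qcoef, qtable)
    for _ in range(n_iter):
        frame = gaussian_filter(frame, sigma)    # thresholding operator
        frame = project(frame, qcoef, qtable)    # data-consistency step
    return frame
```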
Citations: 3
Video Coding for Machines with Feature-Based Rate-Distortion Optimization
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287136
Kristian Fischer, Fabian Brand, Christian Herglotz, A. Kaup
Common state-of-the-art video codecs are optimized to deliver a low bitrate while providing a certain quality for the final human observer, which is achieved by rate-distortion optimization (RDO). But with the steady improvement of neural networks solving computer vision tasks, more and more multimedia data is no longer observed by humans, but directly analyzed by neural networks. In this paper, we propose a standard-compliant feature-based RDO (FRDO) that is designed to increase the coding performance when the decoded frame is analyzed by a neural network in a video-coding-for-machines scenario. To that end, we replace the pixel-based distortion metrics in the conventional RDO of VTM-8.0 with distortion metrics calculated in the feature space created by the first layers of a neural network. In several tests with the segmentation network Mask R-CNN and single images from the Cityscapes dataset, we compare the proposed FRDO and its hybrid version HFRDO, with different distortion measures in the feature space, against the conventional RDO. With HFRDO, up to 5.49% bitrate can be saved compared to the VTM-8.0 implementation in terms of Bjøntegaard Delta Rate, using the weighted average precision as the quality metric. Additionally, allowing the encoder to vary the quantization parameter results in coding gains of up to 9.95% for the proposed HFRDO compared to conventional VTM.
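As a sketch of the core substitution, the snippet below computes a feature-space distortion with the first layers of a pretrained network and plugs it into the usual Lagrangian cost J = D + λR. A ResNet-50 stem from torchvision stands in for the Mask R-CNN front-end used in the paper; the layer cut, weights, and function names are illustrative assumptions.

```python
import torch
import torchvision

# First layers of a pretrained backbone act as the feature extractor;
# a ResNet-50 stem is a stand-in for the Mask R-CNN front-end.
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
features = torch.nn.Sequential(
    backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
    backbone.layer1,
).eval()

@torch.no_grad()
def feature_distortion(orig, recon):
    """Feature-space SSE between original and reconstructed frames,
    given as (N, 3, H, W) tensors in [0, 1]; this replaces the
    pixel-based SSE/SAD inside the RD cost."""
    return torch.sum((features(orig) - features(recon)) ** 2).item()

def rd_cost(orig, recon, rate_bits, lam):
    """Lagrangian cost J = D_feature + lambda * R for mode decisions."""
    return feature_distortion(orig, recon) + lam * rate_bits
```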
Citations: 23
Merging of MOS of Large Image Databases for No-reference Image Visual Quality Assessment
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287141
Aki Kaipio, Mykola Ponomarenko, K. Egiazarian
Large specialized image databases are used for training no-reference image visual quality metrics. For the images of these databases, mean opinion scores (MOS) are obtained experimentally by collecting the judgments of many observers. The MOS of a given image reflects an averaged human perception of its visual quality. Each database has its own unknown scale of MOS values, depending on the unique content of the database. For training no-reference metrics based on convolutional networks, usually only one selected database is used, because all MOS values at the input of the training loss function should be on the same scale. In this paper, a simple and effective method is proposed for merging several large databases into one database by transforming their MOS onto a common scale. The accuracy of the proposed method is analyzed. The merged MOS is used for practical training of a no-reference metric, and a comparative analysis shows better training effectiveness.
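The abstract does not spell out the transform itself, so the sketch below shows one simple possibility: an affine least-squares map fitted on anchor images that carry scores in both databases, then applied to the rest. The anchor assumption, the scales, and all numbers are hypothetical, not the paper's method.

```python
import numpy as np

def fit_scale_mapping(mos_src, mos_ref):
    """Least-squares fit of mos_ref ≈ a * mos_src + b over anchor images
    scored in both databases (a hypothetical alignment set)."""
    A = np.vstack([mos_src, np.ones_like(mos_src)]).T
    (a, b), *_ = np.linalg.lstsq(A, mos_ref, rcond=None)
    return a, b

# Hypothetical anchors: the same distorted images scored on a 1-5 scale
# in database B and on a 0-100 scale in database A.
mos_b_anchor = np.array([2.1, 3.4, 4.0, 4.6])
mos_a_anchor = np.array([38.0, 55.0, 63.0, 74.0])
a, b = fit_scale_mapping(mos_b_anchor, mos_a_anchor)

mos_b_full = np.array([1.8, 3.0, 4.9])    # rest of database B
mos_b_in_a_scale = a * mos_b_full + b     # merged onto a single scale
```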
Citations: 4
Audio-Fingerprinting via Dictionary Learning
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287073
Christina Saravanos, D. Ampeliotis, K. Berberidis
In recent years, several successful schemes have been proposed to solve the song identification problem. These techniques aim to construct a signal’s audio-fingerprint either by employing conventional signal processing techniques or by computing its sparse representation in the time-frequency domain. This paper proposes a new audio-fingerprinting scheme which constructs a unique and concise representation of an audio signal by applying a dictionary, learnt here via the well-known K-SVD algorithm applied to a song database. The promising experimental results suggest that the proposed approach not only performed rather well in identifying the signal content of several audio clips, even in cases where this content had been distorted by noise, but also surpassed the recognition rate of a Shazam-based paradigm.
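A rough shape of such a pipeline is sketched below: spectrogram frames are sparse-coded over a learnt dictionary and the codes' support pattern serves as a clip-level fingerprint. scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD (same sparse-coding objective, different dictionary update), and the fingerprint definition is an illustrative choice, not the paper's.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import MiniBatchDictionaryLearning

def stft_frames(audio, fs, n=512):
    """Log-magnitude spectrogram frames, one feature row per frame."""
    _, _, S = spectrogram(audio, fs, nperseg=n)
    return np.log1p(S).T

def learn_dictionary(training_frames, n_atoms=256, sparsity=8):
    """Dictionary learning stand-in for K-SVD, with OMP sparse coding."""
    dl = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=sparsity,
    )
    return dl.fit(training_frames)

def fingerprint(dl, frames):
    """Average support pattern of the sparse codes: a compact clip-level
    signature that can be matched by, e.g., cosine similarity."""
    codes = dl.transform(frames)
    return (codes != 0).mean(axis=0)
```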
Citations: 1
A Low Complexity Long Short-Term Memory Based Voice Activity Detection
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287142
Ruiting Yang, Jie Liu, Xiang Deng, Zhuochao Zheng
Voice Activity Detection (VAD) plays an important role in audio processing, but it remains a challenge when a voice signal is corrupted by strong and transient noise. In this paper, an accurate and causal VAD module using a long short-term memory (LSTM) deep neural network is proposed. A set of features including Gammatone cepstral coefficients (GTCC) and selected spectral features is used. The low-complexity structure allows it to be easily implemented in speech processing algorithms and applications. By carefully pre-processing and labeling the collected training data as speech or non-speech and training the LSTM net, experiments show that the proposed VAD is able to distinguish speech from different types of noisy background effectively. Its robustness against changes, including varying frame length, moving speech sources, and speaking in different languages, is further investigated.
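A minimal sketch of such a causal model is given below: a unidirectional LSTM over per-frame features (GTCC extraction is not shown; the feature dimension and all hyperparameters are placeholders) with a per-frame sigmoid speech/non-speech output and a standard binary cross-entropy training step.

```python
import torch
import torch.nn as nn

class LSTMVAD(nn.Module):
    """Causal VAD sketch: a unidirectional LSTM over per-frame features
    (e.g., GTCCs plus spectral features) with a sigmoid output."""
    def __init__(self, n_features=20, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, frames, n_features)
        h, _ = self.lstm(x)                # unidirectional, hence causal
        return torch.sigmoid(self.head(h)).squeeze(-1)

# Training skeleton with frame-level speech labels in {0, 1}.
model = LSTMVAD()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
feats = torch.randn(4, 100, 20)                  # stand-in feature batch
labels = torch.randint(0, 2, (4, 100)).float()   # stand-in labels
loss = loss_fn(model(feats), labels)
loss.backward()
opt.step()
```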
Citations: 0
On Verification of Blur and Sharpness Metrics for No-reference Image Visual Quality Assessment
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287110
Sheyda Ghanbaralizadeh Bahnemiri, Mykola Ponomarenko, K. Egiazarian
Natural images may contain regions with different levels of blur affecting image visual quality. No-reference image visual quality metrics should be able to effectively evaluate both blur and sharpness levels in a given image. In this paper, we propose a large image database, BlurSet, to verify this ability. BlurSet contains 5000 grayscale images of size 128×128 pixels with different levels of Gaussian blur and unsharp masking. For each image, a scalar value indicating its level of blur or sharpness is provided. Several image quality assessment criteria are presented to evaluate how well a given metric can estimate the level of blur/sharpness on BlurSet. An extensive comparative analysis of different no-reference metrics is carried out. Reachable levels of the quality criteria are evaluated using the proposed blur/sharpness convolutional neural network (BSCNN).
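Both degradations are short operations around a Gaussian kernel, which the sketch below illustrates; the signed "level" mimics BlurSet's scalar blur/sharpness label, though the exact parameterization used to build the database may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_patch(patch, level):
    """Apply Gaussian blur (level > 0) or unsharp masking (level < 0)
    to a 128x128 grayscale patch in [0, 1]; level = 0 leaves it intact.
    The signed scalar plays the role of the per-image label."""
    if level > 0:                       # blur: sigma grows with level
        return gaussian_filter(patch, sigma=level)
    if level < 0:                       # sharpen: unsharp mask
        blurred = gaussian_filter(patch, sigma=1.0)
        return np.clip(patch - level * (patch - blurred), 0.0, 1.0)
    return patch
```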
Citations: 2
Eye Movement State Trajectory Estimator based on Ancestor Sampling
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287155
S. Malladi, J. Mukhopadhyay, M. Larabi, S. Chaudhury
Human gaze dynamics mainly concern the sequence of occurrence of three eye movements: fixations, saccades, and microsaccades. In this paper, we correlate them, as three different states, to the velocities of eye movements. We build a state trajectory estimator based on ancestor sampling (ST EAS), which captures the features of the human temporal gaze pattern to identify the kind of visual stimuli. We used a gaze dataset of 72 viewers watching 60 video clips, equally split into four visual categories. Uniformly sampled velocity vectors from the training set are used to find the most suitable parameters of the proposed statistical model. The optimized model is then used for both gaze data classification and video retrieval on the test set. We observed a classification accuracy of 93.265% and a mean reciprocal rank of 0.888 for video retrieval on the test set. Hence, this model can be used for viewer-independent video indexing, providing viewers an easier way to navigate through the contents.
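The ancestor-sampling machinery itself is involved; as a much simpler stand-in that conveys the velocity-to-state idea, the sketch below Viterbi-decodes a three-state HMM (fixation, microsaccade, saccade) with Gaussian emissions over log velocity. All transition and emission parameters are illustrative, not the paper's.

```python
import numpy as np

STATES = ["fixation", "microsaccade", "saccade"]
MEANS = np.log([5.0, 30.0, 200.0])   # typical velocities in deg/s, per state
STD = 0.8                            # emission spread in log-velocity
TRANS = np.log(np.array([[0.95, 0.04, 0.01],     # log transition matrix
                         [0.50, 0.45, 0.05],
                         [0.30, 0.05, 0.65]]))

def viterbi_states(velocity):
    """Most likely state sequence for an eye-velocity trace (deg/s)."""
    obs = np.log(np.maximum(velocity, 1e-3))
    ll = -0.5 * ((obs[:, None] - MEANS) / STD) ** 2   # (T, 3) log emissions
    delta = ll[0].copy()
    back = np.zeros((len(obs), 3), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + TRANS               # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + ll[t]
    path = [int(delta.argmax())]
    for t in range(len(obs) - 1, 0, -1):              # backtrack
        path.append(back[t][path[-1]])
    return [STATES[s] for s in reversed(path)]
```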
Citations: 1
A Coarse Representation of Frames Oriented Video Coding By Leveraging Cuboidal Partitioning of Image Data
Pub Date : 2020-09-21 DOI: 10.1109/MMSP48831.2020.9287138
Ashek Ahmmed, M. Paul, Manzur Murshed, D. Taubman
Video coding algorithms attempt to minimize the significant commonality that exists within a video sequence. Each new video coding standard contains tools that can perform this task more efficiently than its predecessors. In this work, we form a coarse representation of the current frame by minimizing commonality within that frame while preserving its important structural properties. The building blocks of this coarse representation are rectangular regions called cuboids, which are computationally simple and have a compact description. We then propose to employ the coarse frame as an additional source for predictive coding of the current frame. Experimental results show an improvement in bit rate savings over a reference HEVC codec, with a minor increase in codec computational complexity.
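As a sketch of how such a coarse frame can be built, the greedy partitioner below repeatedly splits the rectangle that deviates most from its mean, choosing the cut that most reduces squared error, then paints each cuboid with its mean intensity. The splitting criterion and the brute-force cut search are illustrative assumptions; the paper's method may use a different commonality measure and a far more efficient search.

```python
import numpy as np

def cuboid_partition(img, max_cuboids=64):
    """Greedy cuboidal partitioning of a grayscale frame: returns a
    coarse frame (each cuboid painted with its mean) and the rectangle
    list (y0, x0, y1, x1) describing the partition."""
    def sse(r):
        y0, x0, y1, x1 = r
        b = img[y0:y1, x0:x1]
        return float(((b - b.mean()) ** 2).sum())

    def best_split(r):
        y0, x0, y1, x1 = r
        best = None
        for y in range(y0 + 1, y1):       # candidate horizontal cuts
            c = sse((y0, x0, y, x1)) + sse((y, x0, y1, x1))
            if best is None or c < best[0]:
                best = (c, (y0, x0, y, x1), (y, x0, y1, x1))
        for x in range(x0 + 1, x1):       # candidate vertical cuts
            c = sse((y0, x0, y1, x)) + sse((y0, x, y1, x1))
            if best is None or c < best[0]:
                best = (c, (y0, x0, y1, x), (y0, x, y1, x1))
        return best

    rects = [(0, 0, img.shape[0], img.shape[1])]
    while len(rects) < max_cuboids:
        worst = max(rects, key=sse)       # most internal "commonality" left
        split = best_split(worst)
        if split is None:                 # single-pixel rectangle
            break
        rects.remove(worst)
        rects.extend(split[1:])

    coarse = np.empty_like(img, dtype=float)
    for y0, x0, y1, x1 in rects:
        coarse[y0:y1, x0:x1] = img[y0:y1, x0:x1].mean()
    return coarse, rects
```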
Citations: 6
MMSP 2020 Breaker Page
Pub Date : 2020-09-21 DOI: 10.1109/mmsp48831.2020.9287118
{"title":"MMSP 2020 Breaker Page","authors":"","doi":"10.1109/mmsp48831.2020.9287118","DOIUrl":"https://doi.org/10.1109/mmsp48831.2020.9287118","url":null,"abstract":"","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127262599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0