Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287062
Melissa Sanabria, F. Precioso, Thomas Menguy
In broadcast companies, human operators currently select which actions belong in a summary, guided by rules they have built from their own experience with different sources of information. These rules define the profiles of actions of interest that help the operator generate better customized summaries. Most of these profiles do not rely directly on the broadcast video content but rather exploit metadata describing the course of the match. In this paper, we show that the signals produced by the attention layer of a recurrent neural network can be seen as a learned representation of these action profiles, providing a new tool to support operators’ work. Results on soccer matches show the capacity of our approach to transfer knowledge between datasets from different broadcasting companies and different leagues, and the ability of the attention layer to learn meaningful action profiles.
Title: Profiling Actions for Sport Video Summarization: An attention signal analysis
Published in: 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)
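The abstract above describes reading the attention weights of a recurrent network as a per-timestep "action profile" signal. The paper's architecture is not reproduced here; the following is a minimal sketch of additive attention pooling over a sequence of hidden states, where `attention_pool`, the score vector `w`, and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(h, w, b=0.0):
    """Score each timestep of a hidden-state sequence h (T x D) with a
    learned vector w, normalize with softmax, and return both the
    attention weights (the 'signal' over time) and the weighted summary."""
    scores = h @ w + b        # (T,) one score per timestep
    alpha = softmax(scores)   # attention signal: positive, sums to 1
    context = alpha @ h       # (D,) attention-weighted summary vector
    return alpha, context

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 4))   # e.g. hidden states for 6 events of an action
w = rng.normal(size=4)
alpha, context = attention_pool(h, w)
```

Plotting `alpha` over the events of an action is what yields the profile-like curve the operators can inspect.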
Pub Date: 2020-09-21 | DOI: 10.1109/mmsp48831.2020.9287087
Title: MMSP 2020 TOC
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287070
Daniela Lanz, A. Kaup
Efficient lossless coding of medical volume data with a temporal axis can be achieved by motion compensated wavelet lifting. As a side benefit, a scalable bit stream is generated, which allows the data to be displayed at different resolution layers, a feature in high demand for telemedicine applications. Additionally, motion compensated temporal filtering preserves the similarity of the temporal base layer to the input sequence. However, for medical sequences the overall rate is increased by the specific noise characteristics of the data. Denoising filters inside the lifting structure can improve compression efficiency significantly without endangering the perfect reconstruction property. Designing an optimal filter is nevertheless a crucial task. In this paper, we present a new method for selecting, in a rate-distortion sense, the optimal strength of a given denoising filter. This allows the required rate to be minimized based on a single encoder input parameter that controls the requested distortion of the temporal base layer.
Title: Optimizing Rate-Distortion Performance of Motion Compensated Wavelet Lifting with Denoised Prediction and Update
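The key property the abstract relies on is that any filter placed inside the lifting steps leaves reconstruction exact, because the decoder applies the identical filter. A minimal sketch with a single temporal Haar-style lifting step on two 1-D frames; the 3-tap moving average standing in for the paper's denoising filter, and the 0.5 update weight, are illustrative assumptions.

```python
import numpy as np

def denoise(x):
    # stand-in denoising filter: simple 3-tap moving average
    k = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, k, mode="same")

def lift_forward(a, b):
    """One lifting step on frames a, b: predict with a denoised version
    of a, then update the base layer with a denoised residual."""
    H = b - denoise(a)         # highpass: prediction residual
    L = a + 0.5 * denoise(H)   # lowpass: temporal base layer
    return L, H

def lift_inverse(L, H):
    # inverting the steps in reverse order undoes them exactly,
    # regardless of what filter denoise() applies
    a = L - 0.5 * denoise(H)
    b = H + denoise(a)
    return a, b

rng = np.random.default_rng(0)
a, b = rng.normal(size=32), rng.normal(size=32)
L, H = lift_forward(a, b)
a2, b2 = lift_inverse(L, H)
```

The paper's contribution is choosing the *strength* of the denoising filter in a rate-distortion sense; the sketch only shows why that choice cannot break losslessness.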
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287141
Aki Kaipio, Mykola Ponomarenko, K. Egiazarian
No-reference image visual quality metrics are trained on large specialized image databases. For the images in these databases, mean opinion scores (MOS) are obtained experimentally by collecting the judgments of many observers; the MOS of a given image reflects the averaged human perception of its visual quality. Each database has its own unknown MOS scale, which depends on the database's unique content. Training of no-reference metrics based on convolutional networks usually uses only one selected database, because all MOS values fed to the training loss function must be on the same scale. In this paper, we propose a simple and effective method for merging several large databases into one by transforming their MOS values onto a common scale. The accuracy of the proposed method is analyzed, and the merged MOS is used for practical training of a no-reference metric. A comparative analysis shows that this training is more effective.
Title: Merging of MOS of Large Image Databases for No-reference Image Visual Quality Assessment
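The abstract does not specify the merging method, but the general idea of mapping one database's MOS scale onto another can be sketched as a least-squares affine fit over content scored on both scales. The shared-content assumption, the function name `fit_mos_transform`, and the synthetic scales below are all illustrative.

```python
import numpy as np

def fit_mos_transform(mos_shared_a, mos_shared_b):
    """Least-squares affine map  mos_a ~ k * mos_b + c  estimated from
    images that have scores on both scales."""
    A = np.vstack([mos_shared_b, np.ones_like(mos_shared_b)]).T
    k, c = np.linalg.lstsq(A, mos_shared_a, rcond=None)[0]
    return k, c

# synthetic example: database B uses a 0..9 scale, database A 0..100
mos_b = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
mos_a = 10.0 * mos_b + 5.0        # ground-truth relation for the demo
k, c = fit_mos_transform(mos_a, mos_b)
merged = k * mos_b + c            # B's scores expressed on A's scale
```

Once every database's MOS is expressed on one scale, a single loss function can be trained on the union of the databases.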
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287073
Christina Saravanos, D. Ampeliotis, K. Berberidis
In recent years, several successful schemes have been proposed to solve the song identification problem. These techniques construct a signal’s audio fingerprint either by employing conventional signal processing techniques or by computing its sparse representation in the time-frequency domain. This paper proposes a new audio-fingerprinting scheme that constructs a unique and concise representation of an audio signal by applying a dictionary, learnt here via the well-known K-SVD algorithm on a song database. The promising experimental results suggest that the proposed approach not only performed rather well at identifying the signal content of several audio clips, even when that content had been distorted by noise, but also surpassed the recognition rate of a Shazam-based paradigm.
Title: Audio-Fingerprinting via Dictionary Learning
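A sparse representation over a dictionary is typically computed with a pursuit algorithm; the indices of the selected atoms already form a compact fingerprint. Below is a small Orthogonal Matching Pursuit sketch. The paper learns its dictionary with K-SVD; here an orthonormal random matrix stands in for the learned dictionary so the demo recovers the support exactly, and all names and sizes are assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of dictionary D
    (unit-norm columns) that best approximate x; the chosen indices can
    serve as a concise fingerprint of the frame."""
    residual = x.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        # re-fit coefficients on the whole support, then update residual
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return sorted(support)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(40, 40)))  # orthonormal stand-in for K-SVD
x = 2.0 * Q[:, 3] - 1.5 * Q[:, 17]              # signal built from atoms 3 and 17
fp = omp(Q, x, k=2)
```

With a learned, overcomplete K-SVD dictionary the same pursuit applies; recovery is then approximate rather than exact.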
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287142
Ruiting Yang, Jie Liu, Xiang Deng, Zhuochao Zheng
Voice Activity Detection (VAD) plays an important role in audio processing, but it remains a challenge when a voice signal is corrupted by strong, transient noise. In this paper, an accurate and causal VAD module using a long short-term memory (LSTM) deep neural network is proposed. A set of features including Gammatone cepstral coefficients (GTCC) and selected spectral features is used. The low-complexity structure allows the module to be easily integrated into speech processing algorithms and applications. After carefully pre-processing the collected training data, labeling it as speech or non-speech, and training the LSTM network, experiments show that the proposed VAD can effectively distinguish speech from different types of noisy background. Its robustness to changes such as varying frame length, moving speech sources, and speaking in different languages is further investigated.
Title: A Low Complexity Long Short-Term Memory Based Voice Activity Detection
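The causality the abstract emphasizes comes from the LSTM processing frames strictly left to right. A minimal numpy sketch of one LSTM cell stepped over a few frames; the GTCC feature extraction and the paper's classifier head are not reproduced, and the dimensions, gate ordering, and random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """Single causal LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order assumed: input, forget, cell candidate, output."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    g = np.tanh(z[2*H:3*H])      # cell candidate
    o = sigmoid(z[3*H:4*H])      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 4                      # e.g. 3 per-frame features, 4 hidden units
W = rng.normal(size=(4*H, D))
U = rng.normal(size=(4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):   # 5 frames, processed causally
    h, c = lstm_step(x, h, c, W, U, b)
```

A speech/non-speech decision would be read off `h` at every frame, e.g. through a sigmoid unit, so the detector never looks ahead.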
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287110
Sheyda Ghanbaralizadeh Bahnemiri, Mykola Ponomarenko, K. Egiazarian
Natural images may contain regions with different levels of blur, which affect image visual quality. No-reference image visual quality metrics should be able to effectively evaluate both blur and sharpness levels in a given image. In this paper, we propose BlurSet, a large image database for verifying this ability. BlurSet contains 5000 grayscale images of size 128×128 pixels with different levels of Gaussian blur and unsharp masking. For each image, a scalar value indicating the level of blur and the level of sharpness is provided. Several image quality assessment criteria are presented to evaluate how well a given metric estimates the level of blur/sharpness on BlurSet. An extensive comparative analysis of different no-reference metrics is carried out, and reachable levels of the quality criteria are evaluated using the proposed blur/sharpness convolutional neural network (BSCNN).
Title: On Verification of Blur and Sharpness Metrics for No-reference Image Visual Quality Assessment
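The two distortions BlurSet is built from, Gaussian blur and unsharp masking, can be sketched in a few lines. This is not the paper's generation pipeline; the separable-convolution helper, the kernel radius, and the `amount` parameter are assumptions for illustration.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def unsharp(img, sigma, amount):
    """Unsharp masking: boost the detail layer img - blur(img)."""
    return img + amount * (img - blur(img, sigma))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
blurred = blur(img, sigma=1.5)
sharpened = unsharp(img, sigma=1.5, amount=0.8)
```

Varying `sigma` and `amount` over a grid is one way such a database could span a continuum from strongly blurred to strongly sharpened versions of each source image.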
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287155
S. Malladi, J. Mukhopadhyay, M. Larabi, S. Chaudhury
Human gaze dynamics mainly concern the sequence of occurrence of three eye movements: fixations, saccades, and microsaccades. In this paper, we treat these as three different states correlated with eye movement velocities. We build a state trajectory estimator based on ancestor sampling (STEAS) model, which captures the features of the human temporal gaze pattern to identify the kind of visual stimuli. We used a gaze dataset of 72 viewers watching 60 video clips equally split into four visual categories. Uniformly sampled velocity vectors from the training set are used to find the most suitable parameters of the proposed statistical model. The optimized model is then used for both gaze data classification and video retrieval on the test set, where we observed a classification accuracy of 93.265% and a mean reciprocal rank of 0.888. This model can therefore be used for viewer-independent video indexing, giving viewers an easier way to navigate through content.
Title: Eye Movement State Trajectory Estimator based on Ancestor Sampling
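The paper's ancestor-sampling model infers a probabilistic state trajectory; as a much simpler stand-in, the velocity-to-state mapping it starts from can be shown with fixed thresholds in the style of an I-VT classifier. The threshold values below are common illustrative choices, not the paper's parameters.

```python
import numpy as np

def classify_states(velocity, saccade_thr=30.0, micro_thr=5.0):
    """Map per-sample gaze velocities (deg/s) to coarse states:
    0 = fixation, 1 = microsaccade, 2 = saccade."""
    v = np.asarray(velocity, dtype=float)
    states = np.zeros(v.shape, dtype=int)
    states[v >= micro_thr] = 1     # faster than fixation drift
    states[v >= saccade_thr] = 2   # full saccade
    return states

v = [1.0, 2.0, 40.0, 80.0, 6.0, 1.5]
states = classify_states(v)
```

The statistical model replaces these hard thresholds with learned distributions over velocities, so state boundaries adapt to the viewer and the stimulus.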
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287138
Ashek Ahmmed, M. Paul, Manzur Murshed, D. Taubman
Video coding algorithms attempt to minimize the significant commonality that exists within a video sequence. Each new video coding standard contains tools that perform this task more efficiently than its predecessors. In this work, we form a coarse representation of the current frame by minimizing commonality within that frame while preserving its important structural properties. The building blocks of this coarse representation are rectangular regions called cuboids, which are computationally simple and have a compact description. We then propose to employ the coarse frame as an additional source for predictive coding of the current frame. Experimental results show an improvement in bit rate savings over a reference HEVC codec, with a minor increase in codec computational complexity.
Title: A Coarse Representation of Frames Oriented Video Coding By Leveraging Cuboidal Partitioning of Image Data
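One simple way to build such a rectangle-based coarse frame is to split recursively along the axis-aligned cut that most reduces the squared error of a mean-per-region approximation. This greedy sketch is an illustration of the idea, not the paper's partitioning criterion; the depth limit and cost function are assumptions.

```python
import numpy as np

def best_split(block):
    """Return (axis, index, cost) of the cut minimizing the summed squared
    error of approximating each half by its mean; axis is None if no cut
    beats keeping the block whole."""
    def sse(b):
        return float(((b - b.mean()) ** 2).sum())
    best = (None, None, sse(block))
    for axis in (0, 1):
        for idx in range(1, block.shape[axis]):
            a, b = np.split(block, [idx], axis=axis)
            cost = sse(a) + sse(b)
            if cost < best[2]:
                best = (axis, idx, cost)
    return best

def cuboid_partition(img, depth):
    """Recursively split into rectangles ('cuboids' per frame); return the
    mean-filled coarse approximation."""
    out = img.astype(float).copy()
    axis, idx, _ = best_split(out)
    if depth == 0 or axis is None:
        out[:] = out.mean()       # leaf cuboid: flat region at its mean
        return out
    a, b = np.split(out, [idx], axis=axis)
    return np.concatenate([cuboid_partition(a, depth - 1),
                           cuboid_partition(b, depth - 1)], axis=axis)

img = np.zeros((8, 8))
img[:, 4:] = 100.0               # two flat regions: one cut suffices
coarse = cuboid_partition(img, depth=2)
```

The resulting `coarse` frame is cheap to describe (a few rectangles and means), which is what makes it usable as an extra prediction source.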
Pub Date: 2020-09-21 | DOI: 10.1109/mmsp48831.2020.9287118
Title: MMSP 2020 Breaker Page