Optimizing Video Quality Estimation Across Resolutions
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287116
Abhinau K. Venkataramanan, Chengyang Wu, A. Bovik
Many algorithms have been developed to evaluate the perceptual quality of images and videos, based on models of picture statistics and visual perception. These algorithms attempt to capture user experience better than simple metrics like the peak signal-to-noise ratio (PSNR) and are widely utilized on streaming service platforms and in social networking applications to improve users' Quality of Experience. The growing demand for high-resolution streams and rapid increases in user-generated content (UGC) sharpen interest in the computational cost of carrying out perceptual quality measurements. To this end, we propose a suite of methods to efficiently predict the structural similarity index (SSIM) of high-resolution videos distorted by scaling and compression, from computations performed at lower resolutions. We show the effectiveness of our algorithms by testing on a large corpus of videos and on subjective data.
{"title":"Optimizing Video Quality Estimation Across Resolutions","authors":"Abhinau K. Venkataramanan, Chengyang Wu, A. Bovik","doi":"10.1109/MMSP48831.2020.9287116","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287116","url":null,"abstract":"Many algorithms have been developed to evaluate the perceptual quality of images and videos, based on models of picture statistics and visual perception. These algorithms attempt to capture user experience better than simple metrics like the peak signal-to-noise ratio (PSNR) and are widely utilized on streaming service platforms and in social networking applications to improve users’ Quality of Experience. The growing demand for high-resolution streams and rapid increases in user-generated content (UGC) sharpens interest in the computation involved in carrying out perceptual quality measurements. In this direction, we propose a suite of methods to efficiently predict the structural similarity index (SSIM) of high-resolution videos distorted by scaling and compression, from computations performed at lower resolutions. We show the effectiveness of our algorithms by testing on a large corpus of videos and on subjective data.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133406451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Multiple Object Tracking Using Single Object Tracker and Maximum Weight Clique Graph
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287090
Yujie Hu, Xiang Zhang, Yexin Li, Ran Tian
Tracking multiple objects is a challenging task in time-critical video analysis systems. In the popular tracking-by-detection framework, the core problems of a tracker are the quality of the input detections and the effectiveness of the data association. To address both, we propose a multiple object tracking method that employs a single object tracker to simultaneously compensate for unreliable detections and improve data association. In addition, we utilize a maximum weight clique graph algorithm to solve the optimal assignment in an online mode. In our method, a robust single object tracker propagates previously tracked objects to handle noisy detections in the current frame and serves as a motion cue that improves data association. Furthermore, we use a person re-identification network to learn the historical appearances of the tracklets, strengthening the tracker's identity-preserving ability. We conduct extensive experiments on the MOT benchmark to demonstrate the effectiveness of our tracker.
{"title":"Online Multiple Object Tracking Using Single Object Tracker and Maximum Weight Clique Graph","authors":"Yujie Hu, Xiang Zhang, Yexin Li, Ran Tian","doi":"10.1109/MMSP48831.2020.9287090","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287090","url":null,"abstract":"Tracking multiple objects is a challenging task in time-critical video analysis systems. In the popular tracking-by-detection framework, the core problems of a tracker are the quality of the employed input detections and the effectiveness of the data association. Towards this end, we propose a multiple object tracking method which employs a single object tracker to improve the results of unreliable detection and data association simultaneously. Besides, we utilize maximum weight clique graph algorithm to handle the optimal assignment in an online mode. In our method, a robust single object tracker is used to connect previous tracked objects to tackle the current noise detection and improve the data association as a motion cue. Furthermore, we use person re-identification network to learn the historical appearances of the tracklets in order to promote the tracker’s identification ability. We conduct extensive experiments on the MOT benchmark to demonstrate the effectiveness of our tracker.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116765774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MMSP 2020 List Reviewer Page
Pub Date: 2020-09-21 · DOI: 10.1109/mmsp48831.2020.9287101
{"title":"MMSP 2020 List Reviewer Page","authors":"","doi":"10.1109/mmsp48831.2020.9287101","DOIUrl":"https://doi.org/10.1109/mmsp48831.2020.9287101","url":null,"abstract":"","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123858930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decoding-Energy Optimal Video Encoding For x265
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287054
Christian Herglotz, Marco Bader, Kristian Fischer, A. Kaup
This paper presents optimal x265 encoder configurations and an enhanced optimization algorithm for minimizing the software decoding energy of HEVC-coded videos. We reach this goal with two contributions. First, we perform a detailed analysis of the influence of various encoder settings on the decoding energy. Second, we integrate an enhanced version of an algorithm called decoding-energy-rate-distortion optimization into x265, which we optimize for fast and efficient encoding. This algorithm introduces the estimated decoding energy as an additional optimization criterion in the rate-distortion cost function. We evaluate the extended encoder in terms of bitrate, distortion, and decoding energy, performing energy measurements to verify the superior energy efficiency. We find that combining the 'fastdecode' tuning option of x265 with the enhanced decoding-energy-rate-distortion optimization leads to decoding energy savings of 27.2% and 26.0% for OpenHEVC and HM decoding, respectively. At the same time, we observe compression efficiency losses of 38.2% and a negligible decrease in encoder runtime of 0.39%.
{"title":"Decoding-Energy Optimal Video Encoding For x265","authors":"Christian Herglotz, Marco Bader, Kristian Fischer, A. Kaup","doi":"10.1109/MMSP48831.2020.9287054","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287054","url":null,"abstract":"This paper presents optimal x265-encoder configurations and an enhanced optimization algorithm for minimizing the software decoding energy of HEVC-coded videos. We reach this goal with two contributions. First, we perform a detailed analysis on the influence of various encoder settings on the decoding energy. Second, we include an enhanced version of an algorithm called decoding-energy-rate-distortion optimization into x265, which we optimize for fast and efficient encoding. This algorithm introduces the estimated decoding energy as an additional optimization criterion into the rate-distortion cost function. We evaluate the extended encoder in terms of bitrate, distortion, and decoding energy, where we perform energy measurements to prove the superior energy efficiency. We find that the combination of the ‘fastdecoding’ tuning option of x265 with the enhanced decoding-energy-rate-distortion optimization leads to 27.2% and 26.0% of decoding energy savings for OpenHEVC and HM decoding, respectively. At the same time, compression efficiency losses of 38.2% and negligible decreases in encoder runtime of 0.39% can be observed.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122272611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Maximum A Posteriori Approximation of Hidden Markov Models for Proportional Data
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287112
Samr Ali, N. Bouguila
Hidden Markov models (HMMs) have recently risen to prominence as a key generative machine learning approach for the study and analysis of time series data. While early work focused mainly on applying HMMs to speech recognition, they are now prominent in fields as diverse as video classification and genomics. In this paper, we develop a Maximum A Posteriori framework for learning Generalized Dirichlet HMMs, which were recently proposed as an efficient way of modeling sequential proportional data. In contrast to the conventional Baum-Welch algorithm commonly used for learning HMMs, the proposed algorithm places priors on the desired parameters, thereby regularizing the estimation process. We validate our approach on a challenging video processing application, namely dynamic texture classification.
{"title":"On Maximum A Posteriori Approximation of Hidden Markov Models for Proportional Data","authors":"Samr Ali, N. Bouguila","doi":"10.1109/MMSP48831.2020.9287112","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287112","url":null,"abstract":"Hidden Markov models (HMM) have recently risen as a key generative machine learning approach for time series data study and analysis. While early works focused only on applying HMMs for speech recognition, HMMs are now prominent in various fields such as video classification and genomics. In this paper, we develop a Maximum A Posteriori framework for learning the Generalized Dirichlet HMMs that have been proposed recently as an efficient way for modeling sequential proportional data. In contrast to the conventional Baum Welch algorithm, commonly used for learning HMMs, the proposed algorithm places priors for the learning of the desired parameters; hence, regularizing the estimation process. We validate our proposed approach on a challenging video processing application; namely, dynamic texture classification.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127921950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reverberant Audio Blind Source Separation via Local Convolutive Independent Vector Analysis
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287144
Fangchen Feng, Azeddine Beghdadi
In this paper, we propose a new formulation of the blind source separation problem for audio signals with convolutive mixtures, with the goal of improving the separation performance of Independent Vector Analysis (IVA). The proposed method benefits from both the recently investigated convolutive approximation model and IVA approaches that take advantage of cross-band information to avoid permutation alignment. We first exploit the link between IVA and Sparse Component Analysis (SCA) methods through structured sparsity. We then propose a new framework combining the convolutive narrowband approximation with the Windowed-Group-Lasso (WGL). The model is optimised with an alternating approach in which the convolutive kernel and the source components are jointly optimised.
{"title":"Reverberant Audio Blind Source Separation via Local Convolutive Independent Vector Analysis","authors":"Fangchen Feng, Azeddine Beghdadi","doi":"10.1109/MMSP48831.2020.9287144","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287144","url":null,"abstract":"In this paper, we propose a new formulation for the blind source separation problem for audio signals with convolutive mixtures to improve the separation performance of Independent Vector Analysis (IVA). The proposed method benefits from both the recently investigated convolutive approximation model and the IVA approaches that take advantages of the cross-band information to avoid permutation alignment. We first exploit the link between the IVA and the Sparse Component Analysis (SCA) methods through the structured sparsity. We then propose a new framework by combining the convolutive narrowband approximation and the Windowed-Group-Lasso (WGL). The optimisation of the model is based on the alternating optimisation approach where the convolutive kernel and the source components are jointly optimised.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130845161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Binaural Rendering From Distributed Microphone Signals Considering Loudspeaker Distance in Measurements
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287157
Naoto Iijima, Shoichi Koyama, H. Saruwatari
A method of binaural rendering from distributed microphone recordings is proposed that takes into consideration the loudspeaker distance used when measuring head-related transfer functions (HRTFs). In general, to reproduce binaural signals from the signals captured by multiple microphones in the recording area, the captured sound field is represented by plane-wave decomposition; the HRTF is thus approximated as a transfer function from a plane-wave source. To incorporate the measurement distance, we propose a method based on the spherical-wave decomposition of the sound field, in which the HRTF is assumed to be measured from a point source. Results of experiments using HRTFs calculated by the boundary element method indicate that the proposed spherical-wave-decomposition-based method reproduces binaural signals more accurately than the plane-wave-decomposition-based method. We also evaluate the performance of converting distributed microphone measurements into binaural signals.
{"title":"Binaural Rendering From Distributed Microphone Signals Considering Loudspeaker Distance in Measurements","authors":"Naoto Iijima, Shoichi Koyama, H. Saruwatari","doi":"10.1109/MMSP48831.2020.9287157","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287157","url":null,"abstract":"A method of binaural rendering from distributed microphone recordings that takes loudspeaker distance for measuring head-related transfer function (HRTF) into consideration is proposed. In general, to reproduce the binaural signals from the signals captured by multiple microphones in the recording area, the captured sound field is represented by plane-wave decomposition. Thus, HRTF is approximated as a transfer function from a plane-wave source in binaural rendering. To incorporate the distance in HRTF measurements, we propose a method based on the spherical-wave decomposition of a sound field, in which the HRTF is assumed to be measured from a point source. Result of experiments using HRTFs calculated by the boundary element method indicated that the accuracy of binaural signal reproduction by the proposed method based on the spherical-wave decomposition was higher than that by the plane-wave-decomposition-based method. We also evaluate the performance of signal conversion from distributed microphone measurements into binaural signals.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115144293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wavelet Scattering Transform and CNN for Closed Set Speaker Identification
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287061
Wajdi Ghezaiel, L. Brun, O. Lézoray
In real-world applications, the performance of speaker identification systems degrades as both the amount and the quality of the speech utterances decrease. To address this, we propose a speaker identification system in which short utterances with few training examples are used for person identification; only a very small amount of data, a sentence of 2 to 4 seconds, is used. To achieve this, we propose a novel raw-waveform end-to-end convolutional neural network (CNN) for text-independent speaker identification. We use the wavelet scattering transform as a fixed initialization of the first layers of the CNN and learn the remaining layers in a supervised manner. The conducted experiments show that our hybrid architecture, combining the wavelet scattering transform and a CNN, can successfully perform efficient feature extraction for speaker identification, even with a small number of short-duration training samples.
{"title":"Wavelet Scattering Transform and CNN for Closed Set Speaker Identification","authors":"Wajdi Ghezaiel, L. Brun, O. Lézoray","doi":"10.1109/MMSP48831.2020.9287061","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287061","url":null,"abstract":"In real world applications, the performances of speaker identification systems degrade due to the reduction of both the amount and the quality of speech utterance. For that particular purpose, we propose a speaker identification system where short utterances with few training examples are used for person identification. Therefore, only a very small amount of data involving a sentence of 2-4 seconds is used. To achieve this, we propose a novel raw waveform end-to-end convolutional neural network (CNN) for text-independent speaker identification. We use wavelet scattering transform as a fixed initialization of the first layers of a CNN network, and learn the remaining layers in a supervised manner. The conducted experiments show that our hybrid architecture combining wavelet scattering transform and CNN can successfully perform efficient feature extraction for a speaker identification, even with a small number of short duration training samples.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129910242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Frequency Selective Reconstruction through Register-Based Argmax Calculation
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287071
Andy Regensky, Simon Grosche, Jürgen Seiler, A. Kaup
Frequency Selective Reconstruction (FSR) is a state-of-the-art algorithm for solving diverse image reconstruction tasks in which a subset of the pixel values in the image is missing. However, it entails a high computational complexity due to its iterative, blockwise procedure for reconstructing the missing pixel values. Although the complexity of FSR can be decreased considerably by performing its computations in the frequency domain, the reconstruction still takes multiple seconds to multiple minutes depending on the parameterization. FSR nevertheless lends itself to massive parallelization, which can greatly reduce its reconstruction time. In this paper, we introduce a novel, highly parallelized formulation of FSR adapted to the capabilities of modern GPUs and propose a considerably accelerated, register-based computation of the inherent argmax operation. Altogether, we achieve a 100-fold speed-up, which enables the use of FSR in real-time applications.
{"title":"Real-Time Frequency Selective Reconstruction through Register-Based Argmax Calculation","authors":"Andy Regensky, Simon Grosche, Jürgen Seiler, A. Kaup","doi":"10.1109/MMSP48831.2020.9287071","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287071","url":null,"abstract":"Frequency Selective Reconstruction (FSR) is a state-of-the-art algorithm for solving diverse image reconstruction tasks, where a subset of pixel values in the image is missing. However, it entails a high computational complexity due to its iterative, blockwise procedure to reconstruct the missing pixel values. Although the complexity of FSR can be considerably decreased by performing its computations in the frequency domain, the reconstruction procedure still takes multiple seconds up to multiple minutes depending on the parameterization. However, FSR has the potential for a massive parallelization greatly improving its reconstruction time. In this paper, we introduce a novel highly parallelized formulation of FSR adapted to the capabilities of modern GPUs and propose a considerably accelerated calculation of the inherent argmax calculation. Altogether, we achieve a 100-fold speed-up, which enables the usage of FSR for real-time applications.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125353308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Evolutionary-based Generative Approach for Audio Data Augmentation
Pub Date: 2020-09-21 · DOI: 10.1109/MMSP48831.2020.9287156
Silvan Mertes, Alice Baird, Dominik Schiller, Björn Schuller, E. André
In this paper, we introduce a novel framework for augmenting raw audio data for machine learning classification tasks. In the first part of our framework, we employ a generative adversarial network (GAN) to create new variants of the audio samples that already exist in the source dataset of the classification task. In the second step, we utilize an evolutionary algorithm to search the input (latent) space of the previously trained GAN with respect to predefined characteristics of the generated audio. In this way, we can generate audio in a controlled manner that improves the classification performance of the original task. To validate our approach, we test it on the task of soundscape classification. We show that our approach leads to a substantial improvement in classification results compared with both a training routine without data augmentation and training with uncontrolled GAN-based data augmentation.
{"title":"An Evolutionary-based Generative Approach for Audio Data Augmentation","authors":"Silvan Mertes, Alice Baird, Dominik Schiller, Björn Schuller, E. André","doi":"10.1109/MMSP48831.2020.9287156","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287156","url":null,"abstract":"In this paper, we introduce a novel framework to augment raw audio data for machine learning classification tasks. For the first part of our framework, we employ a generative adversarial network (GAN) to create new variants of the audio samples that are already existing in our source dataset for the classification task. In the second step, we then utilize an evolutionary algorithm to search the input domain space of the previously trained GAN, with respect to predefined characteristics of the generated audio. This way we are able to generate audio in a controlled manner that contributes to an improvement in classification performance of the original task. To validate our approach, we chose to test it on the task of soundscape classification. We show that our approach leads to a substantial improvement in classification results when compared to a training routine without data augmentation and training with uncontrolled data augmentation with GANs.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128642548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}