Weakly supervised neural networks for Part-Of-Speech tagging
S. Chopra, S. Bangalore
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6288291 | 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1965-1968
We introduce a simple and novel method for the weakly supervised problem of Part-Of-Speech tagging with a dictionary. Our method involves training a connectionist network that simultaneously learns a distributed latent representation of the words while maximizing the tagging accuracy. To compensate for the unavailability of true labels, we train the model using a curriculum: instead of presenting training samples in random order, the model is trained on an ordered sequence of samples, proceeding from “easier” to “harder” ones. On a standard test corpus, we show that without using any grammatical information, our model outperforms the standard EM algorithm in tagging accuracy, and its performance is comparable to other state-of-the-art models. We also show that curriculum learning in this setting significantly improves performance, both in terms of speed of convergence and in terms of generalization.
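The easy-to-hard ordering described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the difficulty score (sentence length) and the staged schedule below are hypothetical stand-ins for whatever criterion the authors use.

```python
# Curriculum learning sketch: train on samples ordered from "easy" to "hard"
# instead of in random order. Sentence length is a hypothetical difficulty score.

def curriculum_order(samples, difficulty):
    """Return samples sorted from easiest to hardest."""
    return sorted(samples, key=difficulty)

def train(samples, n_stages=3):
    """Train in stages, gradually admitting harder samples."""
    ordered = curriculum_order(samples, difficulty=len)
    seen = []
    for stage in range(1, n_stages + 1):
        # Admit the easiest stage/n_stages fraction of the data.
        cutoff = (stage * len(ordered)) // n_stages
        seen = ordered[:cutoff]
        # ... one or more training epochs over `seen` would go here ...
    return seen

sentences = [["the", "dog"], ["a"], ["he", "ran", "fast"], ["dogs", "bark"]]
final = train(sentences)
print([len(s) for s in final])  # easiest-to-hardest order: [1, 2, 2, 3]
```

By the last stage the model has seen every sample, but always in easiest-first order, which is the property the abstract credits for faster convergence.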
A multichannel MMSE-based framework for joint blind source separation and noise reduction
M. Souden, S. Araki, K. Kinoshita, T. Nakatani, H. Sawada
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6287829 | ICASSP 2012, pp. 109-112
In this paper, we propose a new framework to separate multiple speech signals and reduce the additive acoustic noise using multiple microphones. In this framework, we start by formulating the minimum-mean-square error (MMSE) criterion to retrieve each of the desired speech signals from the observed mixtures of sounds and outline the importance of multi-speaker activity detection. The latter is modeled by introducing a latent variable whose posterior probability is computed via expectation maximization (EM) combining both the spatial and spectral cues of the multichannel speech observations. We experimentally demonstrate that the resulting joint blind source separation (BSS) and noise reduction solution performs remarkably well in reverberant and noisy environments.
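In its simplest single-channel form, the MMSE criterion at the heart of this framework reduces to the familiar Wiener gain. The sketch below is that scalar special case only (with known signal and noise variances), not the authors' multichannel, EM-driven estimator:

```python
import random

# Scalar MMSE (Wiener) estimation sketch: observe y = s + n and estimate s
# with the gain sigma_s^2 / (sigma_s^2 + sigma_n^2). A single-channel
# illustration of the MMSE criterion, not the paper's multichannel method.

random.seed(0)
var_s, var_n = 1.0, 1.0
gain = var_s / (var_s + var_n)          # Wiener gain = 0.5 here

s = [random.gauss(0, var_s ** 0.5) for _ in range(20000)]
n = [random.gauss(0, var_n ** 0.5) for _ in range(20000)]
y = [si + ni for si, ni in zip(s, n)]

# Mean-squared error of the raw observation vs. the MMSE estimate.
mse_raw = sum((yi - si) ** 2 for yi, si in zip(y, s)) / len(s)
mse_mmse = sum((gain * yi - si) ** 2 for yi, si in zip(y, s)) / len(s)
print(mse_raw, mse_mmse)  # the MMSE estimate roughly halves the error
```

The theoretical MMSE is var_s*var_n/(var_s+var_n) = 0.5 here, versus var_n = 1.0 for using the noisy observation directly.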
Cyclic orthogonal codes in CDMA-based asynchronous Wireless Body Area Networks
Ali M Tawfiq, J. Abouei, K. Plataniotis
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6288198 | ICASSP 2012, pp. 1593-1596
This work considers a CDMA-based Wireless Body Area Network (WBAN) where multiple biosensors communicate simultaneously to a central node in an asynchronous fashion. The main goal of this paper is to present an augmentation protocol for the physical layer of the IEEE 802.15.6 specifications with focus on Multiple Access Interference (MAI) mitigation in a proactive WBAN. The proposed methodology uses a new set of orthogonal codes from the conventional Walsh-Hadamard matrix which has the special property of “cyclic orthogonality”. This property ensures that the asynchronous nature of the WBAN does not produce MAI amongst the multiple on-body sensors. The work investigates the optimality of such codes in WBANs in terms of link Bit Error Rate (BER) performance. We show that the proposed spreading codes outperform conventional non-cyclic orthogonal spreading codes in a practical Rayleigh fading environment.
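Cyclic orthogonality is stronger than ordinary orthogonality: two codes must remain orthogonal under every circular shift, which is what matters when users are not chip-synchronized. The check below over a small Sylvester-ordered Hadamard matrix is only an illustration of the property (the paper constructs a particular code set; this sketch just shows that standard Walsh rows do not all satisfy it):

```python
# Check which pairs of Walsh-Hadamard rows stay orthogonal under all
# circular shifts (cyclic orthogonality). Sylvester construction.

def hadamard(n):
    """Sylvester-ordered Hadamard matrix of size n (n a power of 2)."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def cyclically_orthogonal(a, b):
    """True if a and every circular shift of b have zero inner product."""
    n = len(a)
    return all(
        sum(a[i] * b[(i - s) % n] for i in range(n)) == 0
        for s in range(n)
    )

H = hadamard(4)
for i in range(4):
    for j in range(i + 1, 4):
        print(i, j, cyclically_orthogonal(H[i], H[j]))
# Row 0 (all ones) is cyclically orthogonal to every other row, but rows
# 2 and 3 are circular shifts of each other, so their cross-correlation
# peaks at a nonzero shift -- exactly the MAI an asynchronous WBAN sees.
```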
Efficient Gaussian inference algorithms for phase imaging
Jingshan Zhang, J. Dauwels, M. A. Vázquez, L. Waller
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6287959 | ICASSP 2012, pp. 617-620
Novel efficient algorithms are developed to infer the phase of a complex optical field from a sequence of intensity images taken at different defocus distances. The non-linear observation model is approximated by a linear model. The complex optical field is inferred by iterative Kalman smoothing in the Fourier domain: forward and backward sweeps of Kalman recursions are alternated, and in each such sweep, the approximate linear model is refined. By limiting the number of iterations, one can trade off accuracy vs. complexity. The complexity of each iteration in the proposed algorithm is on the order of N log N, where N is the number of pixels per image. The required storage scales linearly with N. In contrast, the complexity of existing phase inference algorithms scales with N^3 and the required storage with N^2. The proposed algorithms may enable real-time estimation of optical fields from noisy intensity images.
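The forward-backward structure the authors alternate is that of a Kalman (Rauch-Tung-Striebel) smoother. A minimal scalar version is sketched below; the paper runs such recursions per Fourier coefficient with a relinearized observation model, which this toy omits:

```python
# Scalar Kalman filter + Rauch-Tung-Striebel (RTS) smoother sketch:
# state x_t = a*x_{t-1} + w, w ~ N(0, q); observation y_t = x_t + v, v ~ N(0, r).

def rts_smooth(ys, a=1.0, q=0.0, r=0.01, m0=0.0, p0=1.0):
    # Forward sweep: Kalman filter (filtered and predicted means/variances).
    mf, pf, mp, pp = [], [], [], []
    m, p = m0, p0
    for y in ys:
        m_pred, p_pred = a * m, a * a * p + q     # predict
        k = p_pred / (p_pred + r)                 # Kalman gain
        m = m_pred + k * (y - m_pred)             # update
        p = (1 - k) * p_pred
        mf.append(m); pf.append(p)
        mp.append(m_pred); pp.append(p_pred)
    # Backward sweep: RTS smoother.
    ms = mf[:]
    for t in range(len(ys) - 2, -1, -1):
        g = a * pf[t] / pp[t + 1]                 # smoother gain
        ms[t] = mf[t] + g * (ms[t + 1] - mp[t + 1])
    return ms

smoothed = rts_smooth([5.0] * 50)
print(smoothed[0])  # close to 5.0: every estimate uses all 50 observations
```

Each sweep touches every sample once, which is the source of the per-iteration cost the abstract cites (linear in the state size, times the FFT cost in the Fourier-domain setting).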
Low-resource speech recognition with automatically learned sparse inverse covariance matrices
Weibin Zhang, Pascale Fung
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6288977 | ICASSP 2012, pp. 4737-4740
Full covariance acoustic models trained with limited training data generalize poorly to unseen test data because of their large number of free parameters. We propose to use sparse inverse covariance matrices to address this problem. Previous sparse inverse covariance methods never outperformed full covariance methods. We propose a method to automatically drive the structure of the inverse covariance matrices toward sparsity during training. We form a new objective function by adding L1 regularization to the traditional maximum likelihood objective. The graphical lasso method for the estimation of a sparse inverse covariance matrix is incorporated into the Expectation Maximization algorithm to learn the HMM parameters under the new objective. Experimental results show that only about 25% of the parameters of the inverse covariance matrices need to be nonzero to achieve the same performance as a full covariance system. Our proposed system using sparse inverse covariance Gaussians also significantly outperforms a system using full covariance Gaussians trained on limited data.
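The effect of the L1 term on the precision (inverse covariance) matrix can be seen even in a 2x2 toy problem. The brute-force grid search below is a hypothetical illustration only (the paper uses the graphical lasso inside EM, not grid search): it maximizes log det(Λ) − tr(SΛ) − λ‖Λ‖₁ and shows the off-diagonal entry shrinking to exactly zero as λ grows.

```python
import math

# L1-penalized maximum likelihood for a 2x2 precision matrix
# Lambda = [[a, b], [b, a]], with sample covariance S = [[1, 0.6], [0.6, 1]].
# Objective: log det(Lambda) - trace(S @ Lambda) - lam * 2*|b|
# (penalizing the off-diagonals). Brute-force grid search, purely to show
# how the L1 term drives b to zero.

def best_offdiag(lam, s12=0.6):
    best, best_b = -float("inf"), None
    for ia in range(2, 61):                 # a in [0.1, 3.0]
        a = ia * 0.05
        for ib in range(-59, 60):           # b in (-2.95, 2.95), includes 0.0
            b = ib * 0.05
            if abs(b) >= a:                 # keep Lambda positive definite
                continue
            f = math.log(a * a - b * b) - (2 * a + 2 * s12 * b) - 2 * lam * abs(b)
            if f > best:
                best, best_b = f, b
    return best_b

print(best_offdiag(0.0))   # near -0.9375, the off-diagonal of inv(S)
print(best_offdiag(5.0))   # 0.0: a strong penalty zeroes the off-diagonal
```

At λ = 0 the maximizer is the unpenalized MLE S⁻¹, whose off-diagonal is −0.6/(1 − 0.36) = −0.9375; as λ grows past the point where the penalty outweighs the likelihood gain, the off-diagonal snaps to zero, which is the sparsification the paper exploits.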
Factor analysis of Laplacian approach for speaker recognition
Jinchao Yang, Chunyan Liang, L. Yang, Hongbin Suo, Junjie Wang, Yonghong Yan
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6288850 | ICASSP 2012, pp. 4221-4224
In this study, we introduce a new factor analysis of Laplacian approach to speaker recognition under the support vector machine (SVM) framework. The Laplacian-projected supervector from our proposed approach, which finds an embedding that preserves local information via locality preserving projections (LPP), is believed to contain speaker-dependent information. The proposed method was compared with the state-of-the-art total variability approach on the 2010 National Institute of Standards and Technology (NIST) Speaker Recognition Evaluation (SRE) corpus. The comparison shows that the proposed method is effective.
Classified-Filter-based Post-Compensation Interpolation for Color Filter Array demosaicing
Jing-Ming Guo, Yun-Fu Liu, B. Lai, Peng-Hua Wang, Jiann-Der Lee
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6288035 | ICASSP 2012, pp. 921-924
In this paper, a classified-filter-based post-compensation algorithm for Color Filter Array (CFA) demosaicing is proposed. This technique improves the image quality of interpolated results obtained by other CFA demosaicing methods. First, each pixel is classified according to the texture variance and angle of its neighborhood. Then, different Least-Mean-Square (LMS) filters are trained to handle pixels with different characteristics. As documented in the experimental results, the proposed scheme substantially boosts image quality and yields better visual perception. Notably, the proposed method can serve as an effective post-compensation step applied after any existing scheme to yield even better image quality.
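An LMS filter of the kind trained per pixel class is a standard adaptive FIR loop. The sketch below identifies a known 2-tap system; the target taps are hypothetical and unrelated to the paper's trained coefficients, and real training would run over image neighborhoods rather than a 1-D signal:

```python
import random

# LMS sketch: adapt a 2-tap FIR filter to match a known target system
# d[n] = 0.5*x[n] + 0.3*x[n-1]. The target taps are hypothetical; the paper
# trains one such least-mean-square filter per pixel class.

random.seed(1)
target = [0.5, 0.3]
w = [0.0, 0.0]                       # adaptive filter taps
mu = 0.05                            # step size
x_prev = 0.0
for _ in range(5000):
    x = random.gauss(0.0, 1.0)
    d = target[0] * x + target[1] * x_prev        # desired output
    y = w[0] * x + w[1] * x_prev                  # filter output
    e = d - y                                     # error
    w[0] += mu * e * x                            # LMS update
    w[1] += mu * e * x_prev
    x_prev = x
print(w)  # converges close to [0.5, 0.3]
```

With white input and no observation noise the taps converge to the target, which is why training a separate filter per texture class can specialize each one to its pixel statistics.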
Inference using phi-divergence Goodness-of-Fit tests
Nikhil Kundargi, A. Tewfik
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6288546 | ICASSP 2012, pp. 3001-3004
In this paper we study the inferential use of goodness-of-fit tests in a non-parametric setting. The utility of such tests is demonstrated for the test case of spectrum sensing applications in cognitive radios. For the first time, we provide a comprehensive framework for decision fusion of an ensemble of goodness-of-fit testing procedures through an Ensemble Goodness-of-Fit test. We also introduce a generalized family of functionals and kernels called Φ-divergences, which allows us to formulate goodness-of-fit tests parameterized by a single parameter s. The performance of these tests is simulated under Gaussian and non-Gaussian noise in a MIMO setting. We show that under uncertainty or non-Gaussianity in the noise, the performance of non-parametric tests in general, and Φ-divergence-based goodness-of-fit tests in particular, is significantly superior to that of the energy detector, with reduced implementation complexity. Especially important is the property that the false alarm rates of our proposed tests are maintained at a fixed level over a wide variation in the channel noise distributions.
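A well-known single-parameter family of goodness-of-fit statistics of this kind is the Cressie-Read power divergence; the sketch below shows that family (using s as the parameter), with Pearson's χ² statistic recovered at s = 1. This is a generic illustration, not necessarily the exact Φ-divergence family the paper formulates for spectrum sensing:

```python
# Cressie-Read power-divergence goodness-of-fit statistic, a standard
# one-parameter family: at s = 1 it reduces to Pearson's chi-square.
# Illustrative only; the paper formulates its Phi-divergence tests for
# spectrum sensing in cognitive radios.

def power_divergence(observed, expected, s):
    """2 / (s*(s+1)) * sum O_i * ((O_i/E_i)^s - 1), for s not in {0, -1}."""
    return (2.0 / (s * (s + 1))) * sum(
        o * ((o / e) ** s - 1.0) for o, e in zip(observed, expected)
    )

obs = [18, 22, 30, 30]            # observed counts
exp = [25, 25, 25, 25]            # expected counts under the null
pearson = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
print(power_divergence(obs, exp, s=1.0), pearson)  # both 4.32
```

Sweeping s trades sensitivity between different kinds of departure from the null, which is what a parameterized family of tests buys over a single fixed statistic.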
Familiar speaker recognition
S. Wenndt, Ronald L. Mitchell
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6288854 | ICASSP 2012, pp. 4237-4240
Speaker recognition by machines can be quite good for large groups, as seen in NIST speaker recognition evaluations. However, machine speaker recognition can be fragile in changing environments. This research examines how robust humans are at recognizing familiar speakers in changing environments. Additionally, bandlimited noise was used to investigate which frequency regions are important for human listeners when recognizing familiar speakers.
Blind estimation and low-rate sampling of sparse MIMO systems with common support
Ying Xiong, Yue M. Lu
Pub Date: 2012-03-25 | DOI: 10.1109/ICASSP.2012.6288768 | ICASSP 2012, pp. 3893-3896
We present a blind estimation algorithm for multi-input multi-output (MIMO) systems with sparse common support. Key to the proposed algorithm is a matrix generalization of the classical annihilating filter technique, which allows us to estimate the nonlinear parameters of the channels through an efficient and noniterative procedure. An attractive property of the proposed algorithm is that it only needs the sensor measurements within a narrow frequency band. By exploiting this feature, we derive efficient sub-Nyquist sampling schemes that significantly reduce the number of samples that need to be retained at each sensor. Numerical simulations verify the accuracy of the proposed estimation algorithm and its robustness in the presence of noise.
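The classical annihilating-filter step can be sketched in the single-channel, two-exponential case; the paper's contribution is a matrix generalization of this idea, which the toy below does not capture. A signal that is a sum of two exponentials is annihilated by a 2-tap-plus-one filter whose roots are the exponential bases:

```python
import math

# Classical annihilating filter sketch: y[n] = c1*u1^n + c2*u2^n is
# annihilated by A(z) = (1 - u1*z^-1)(1 - u2*z^-1) = 1 + a1*z^-1 + a2*z^-2,
# so the filter coefficients -- and hence u1, u2 -- follow from four samples.

u1, u2, c1, c2 = 0.9, 0.5, 2.0, 1.0
y = [c1 * u1 ** n + c2 * u2 ** n for n in range(4)]

# Solve y[n] + a1*y[n-1] + a2*y[n-2] = 0 for n = 2, 3 (Cramer's rule).
det = y[1] * y[1] - y[0] * y[2]
a1 = (-y[2] * y[1] + y[0] * y[3]) / det
a2 = (-y[3] * y[1] + y[2] * y[2]) / det

# The annihilating filter's roots are the exponential bases u1 and u2.
disc = math.sqrt(a1 * a1 - 4 * a2)
roots = sorted([(-a1 + disc) / 2, (-a1 - disc) / 2])
print(roots)  # recovers [0.5, 0.9]
```

The recovery is noniterative (a small linear solve plus root finding), which is the efficiency property the abstract highlights for the matrix-valued generalization.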