Cooperative joint synchronization and localization using time delay measurements
Pub Date: 2016-05-18 | DOI: 10.1109/ICASSP.2016.7472257
H. Naseri, V. Koivunen
In this paper, a novel algorithm is proposed for joint synchronization and localization in ad hoc networks. The proposed algorithm is based on broadcast messaging, so the number of messages grows linearly with the number of nodes, versus quadratically for techniques based on two-way message exchange. The identifiability of the network synchronization problem is improved by introducing localization constraints; hence, the proposed algorithm does not require a full set of measurements. Numerical results are provided using a model based on wireless LAN specifications. In scenarios with missing data, the proposed algorithm significantly improves synchronization and localization performance compared to commonly used techniques.
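To make the broadcast timing model concrete, the sketch below simulates only the synchronization sub-problem, with node positions assumed known: one broadcast per node (so the message count is linear in the number of nodes) yields receive-time equations that are linear in the clock offsets, which can then be solved by least squares. This is not the authors' joint estimator; the geometry, noise levels, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 3e8                                    # propagation speed (m/s)
N = 6                                      # number of nodes
pos = rng.uniform(0, 100, size=(N, 2))     # node positions, assumed known in this sketch
theta = rng.uniform(-1e-6, 1e-6, size=N)   # true clock offsets (s); node 0 is the reference
theta[0] = 0.0

# One broadcast per node (N messages in total), timestamped by every other node.
rows, rhs = [], []
for i in range(N):
    s_i = rng.uniform(0, 1e-3)             # local transmit time scheduled by node i
    for j in range(N):
        if j == i:
            continue
        d_ij = np.linalg.norm(pos[i] - pos[j])
        # Receive timestamp on node j's clock, with 1 ns timing noise.
        r_ij = s_i - theta[i] + d_ij / c + theta[j] + rng.normal(0, 1e-9)
        row = np.zeros(N)
        row[j], row[i] = 1.0, -1.0         # r_ij - s_i - d_ij/c = theta_j - theta_i
        rows.append(row)
        rhs.append(r_ij - s_i - d_ij / c)

A, b = np.array(rows)[:, 1:], np.array(rhs)   # drop the reference node's column
theta_hat = np.concatenate(([0.0], np.linalg.lstsq(A, b, rcond=None)[0]))
print("max clock-offset error (ns):", 1e9 * np.max(np.abs(theta_hat - theta)))
```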
{"title":"Cooperative joint synchronization and localization using time delay measurements","authors":"H. Naseri, V. Koivunen","doi":"10.1109/ICASSP.2016.7472257","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7472257","url":null,"abstract":"In this paper a novel algorithm is proposed for joint synchronization and localization in ad hoc networks. The proposed algorithm is based on broadcast messaging, with number of messages linear to the number of nodes, versus quadratic for techniques based on two-way message exchange. The identifiability of network synchronization problem is improved by introducing localization constraints. Hence, the proposed algorithm does not require a full set of measurements. Numerical results are provided using a model based on wireless LAN specifications. In scenarios with missing data, the proposed algorithm significantly improves synchronization and localization performance compared to commonly used techniques.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125636596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph signal recovery from incomplete and noisy information using approximate message passing
Pub Date: 2016-05-18 | DOI: 10.1109/ICASSP.2016.7472863
Gita Babazadeh Eslamlou, A. Jung, N. Goertz, M. Fereydooni
We consider the problem of recovering a graph signal from noisy and incomplete information. In particular, we propose an iterative method for graph signal recovery based on approximate message passing. The recovery of the graph signal relies on noisy signal values observed at a small number of randomly selected nodes. Our approach exploits the smoothness of typical graph signals occurring in many applications, such as wireless sensor networks or social network analysis; the graph signals are smooth in the sense that neighboring nodes have similar signal values. Methodologically, our algorithm is a new instance of the denoising-based approximate message passing framework recently introduced by Metzler et al. We validate the performance of the proposed recovery method via numerical experiments. In certain scenarios our algorithm outperforms existing methods.
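The paper's estimator is a denoising-based AMP iteration; as a much simpler stand-in that makes the setting concrete (noisy samples at a few randomly selected nodes plus a smoothness prior), the sketch below recovers the signal by Laplacian-regularized least squares. The graph construction, sampling rate, and regularization weight are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small weighted graph: nodes embedded on a line, edge weights decay with distance.
N = 60
coords = np.sort(rng.uniform(0, 1, N))
W = np.exp(-((coords[:, None] - coords[None, :]) ** 2) / 0.001)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W                # combinatorial graph Laplacian

# Smooth ground-truth signal: slowly varying along the embedding.
x_true = np.sin(2 * np.pi * coords)

# Noisy observations at a small random subset of nodes.
m = 15
S = rng.choice(N, size=m, replace=False)
M = np.zeros((m, N)); M[np.arange(m), S] = 1.0
y = M @ x_true + 0.05 * rng.normal(size=m)

# Laplacian-regularized least squares: solve (M^T M + lam * L) x = M^T y.
lam = 0.05
x_hat = np.linalg.solve(M.T @ M + lam * L, M.T @ y)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```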
{"title":"Graph signal recovery from incomplete and noisy information using approximate message passing","authors":"Gita Babazadeh Eslamlou, A. Jung, N. Goertz, M. Fereydooni","doi":"10.1109/ICASSP.2016.7472863","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7472863","url":null,"abstract":"We consider the problem of recovering a graph signal from noisy and incomplete information. In particular, we propose an approximate message passing based iterative method for graph signal recovery. The recovery of the graph signal is based on noisy signal values at a small number of randomly selected nodes. Our approach exploits the smoothness of typical graph signals occurring in many applications, such as wireless sensor networks or social network analysis. The graph signals are smooth in the sense that neighboring nodes have similar signal values. Methodologically, our algorithm is a new instance of the denoising based approximate message passing framework introduced recently by Metzler et. al. We validate the performance of the proposed recovery method via numerical experiments. In certain scenarios our algorithm outperforms existing methods.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131955095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linearly augmented deep neural network
Pub Date: 2016-04-30 | DOI: 10.1109/ICASSP.2016.7472646
Pegah Ghahremani, J. Droppo, M. Seltzer
Deep neural networks (DNNs) are a powerful tool for many large vocabulary continuous speech recognition (LVCSR) tasks. Training a very deep network is a challenging problem, and pre-training techniques are needed in order to achieve the best results. In this paper, we propose a new type of network architecture, the Linearly Augmented Deep Neural Network (LA-DNN). This type of network augments each non-linear layer with a linear connection from layer input to layer output. The resulting LA-DNN model eliminates the need for pre-training, addresses the vanishing-gradient problem for deep networks, has higher capacity for modeling linear transformations, trains significantly faster than a standard DNN, and produces better acoustic models. The proposed model has been evaluated on TIMIT phoneme recognition and AMI speech recognition tasks. Experimental results show that LA-DNN models can have 70% fewer parameters than a DNN while still improving accuracy. The smaller LA-DNN model improves phone accuracy on TIMIT by 2% absolute and word accuracy on AMI by 1.7% absolute.
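The core architectural idea, a linear connection from each layer's input to its output, can be sketched directly. The abstract does not specify whether the linear branch is a learned matrix or a fixed identity, so the learned-matrix form below is an assumption, and the layer sizes are purely illustrative.

```python
import numpy as np

def la_layer(x, W, b, A):
    """One linearly augmented layer: ReLU(x W + b) + x A.

    The linear branch x A gives gradients a path that bypasses the non-linearity,
    which is the property the abstract credits for removing the need for
    pre-training. Shapes: x (batch, d_in), W and A (d_in, d_out), b (d_out,).
    """
    return np.maximum(x @ W + b, 0.0) + x @ A

def la_dnn_forward(x, params):
    """Stack of LA layers; `params` is a list of (W, b, A) tuples."""
    h = x
    for W, b, A in params:
        h = la_layer(h, W, b, A)
    return h

# Tiny usage example with random weights (illustrative sizes only).
rng = np.random.default_rng(0)
dims = [40, 256, 256, 40]
params = [(rng.normal(0, 0.1, (i, o)), np.zeros(o), rng.normal(0, 0.1, (i, o)))
          for i, o in zip(dims[:-1], dims[1:])]
out = la_dnn_forward(rng.normal(size=(8, 40)), params)
print(out.shape)  # (8, 40)
```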
{"title":"Linearly augmented deep neural network","authors":"Pegah Ghahremani, J. Droppo, M. Seltzer","doi":"10.1109/ICASSP.2016.7472646","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7472646","url":null,"abstract":"Deep neural networks (DNN) are a powerful tool for many large vocabulary continuous speech recognition (LVCSR) tasks. Training a very deep network is a challenging problem and pre-training techniques are needed in order to achieve the best results. In this paper, we propose a new type of network architecture, Linear Augmented Deep Neural Network (LA-DNN). This type of network augments each non-linear layer with a linear connection from layer input to layer output. The resulting LA-DNN model eliminates the need for pre-training, addresses the gradient vanishing problem for deep networks, has higher capacity in modeling linear transformations, trains significantly faster than normal DNN, and produces better acoustic models. The proposed model has been evaluated on TIMIT phoneme recognition and AMI speech recognition tasks. Experimental results show that the LA-DNN models can have 70% fewer parameters than a DNN, while still improving accuracy. On the TIMIT phoneme recognition task, the smaller LA-DNN model improves TIMIT phone accuracy by 2% absolute, and AMI word accuracy by 1.7% absolute.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121585491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Language model adaptation for ASR of spoken translations using phrase-based translation models and named entity models
Pub Date: 2016-04-04 | DOI: 10.1109/ICASSP.2016.7472826
Joris Pelemans, Tom Vanallemeersch, Kris Demuynck, Lyan Verwimp, H. V. hamme, P. Wambacq
Language model adaptation based on Machine Translation (MT) is a recently proposed approach to improving the Automatic Speech Recognition (ASR) of spoken translations. It does not suffer from a common problem of rescoring-based approaches, namely that errors made during recognition cannot be recovered by the MT system. In previous work we presented an efficient implementation of MT-based language model adaptation using a word-based translation model. By omitting renormalization and employing weighted updates, the implementation exhibited virtually no adaptation overhead, enabling its use in a real-time setting. In this paper we investigate whether we can improve recognition accuracy without sacrificing the achieved efficiency. More precisely, we investigate the effect of both state-of-the-art phrase-based translation models and named entity probability estimation. We report relative WER reductions of 6.2% over a word-based LM adaptation technique and 25.3% over an unadapted 3-gram baseline on an English-to-Dutch dataset.
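As a deliberately simplified illustration of the "weighted updates without renormalization" idea from the earlier word-based implementation, the toy below adapts only the unigram entries for words proposed by the MT system. The actual system adapts full n-gram models using phrase-based translation probabilities and named entity models; the interpolation weight, probability floor, and function names here are assumptions.

```python
import math
from collections import Counter

def adapt_unigrams(base_logprob, mt_hypothesis, weight=0.5):
    """Toy weighted-update adaptation of a unigram LM (illustrative only).

    Only words proposed by the MT system are touched: their probability is
    interpolated towards the MT-derived relative frequency, and the rest of the
    model is left as-is with no renormalization, so the update cost scales with
    the hypothesis length rather than the vocabulary size.
    """
    counts = Counter(mt_hypothesis)
    total = sum(counts.values())
    adapted = dict(base_logprob)
    for word, c in counts.items():
        p_base = math.exp(base_logprob.get(word, -20.0))   # unseen words get a small floor
        adapted[word] = math.log((1 - weight) * p_base + weight * c / total)
    return adapted

base = {"the": -2.0, "cat": -6.0, "kat": -12.0}             # log-probabilities of a background LM
mt_hyp = "the kat sat on the mat".split()                   # hypothesis from the MT system
print(adapt_unigrams(base, mt_hyp)["kat"])                  # boosted towards the MT evidence
```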
{"title":"Language model adaptation for ASR of spoken translations using phrase-based translation models and named entity models","authors":"Joris Pelemans, Tom Vanallemeersch, Kris Demuynck, Lyan Verwimp, H. V. hamme, P. Wambacq","doi":"10.1109/ICASSP.2016.7472826","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7472826","url":null,"abstract":"Language model adaptation based on Machine Translation (MT) is a recently proposed approach to improve the Automatic Speech Recognition (ASR) of spoken translations that does not suffer from a common problem in approaches based on rescoring i.e. errors made during recognition cannot be recovered by the MT system. In previous work we presented an efficient implementation for MT-based language model adaptation using a word-based translation model. By omitting renormalization and employing weighted updates, the implementation exhibited virtually no adaptation overhead, enabling its use in a real-time setting. In this paper we investigate whether we can improve recognition accuracy without sacrificing the achieved efficiency. More precisely, we investigate the effect of both state-of-the-art phrase-based translation models and named entity probability estimation. We report relative WER reductions of 6.2% over a word-based LM adaptation technique and 25.3% over an unadapted 3-gram baseline on an English-to-Dutch dataset.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123429588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph filter banks with M-channels, maximal decimation, and perfect reconstruction
Pub Date: 2016-03-31 | DOI: 10.1109/ICASSP.2016.7472446
Oguzhan Teke, P. Vaidyanathan
Signal processing on graphs finds applications in many areas. Motivated by recent developments, this paper studies the concept of spectrum folding (aliasing) for graph signals under the downsample-then-upsample operation. In this development, we use a special eigenvector structure that is unique to the adjacency matrices of M-block cyclic graphs. We then introduce M-channel maximally decimated filter banks. By manipulating the characteristics of the aliasing effect, we construct polynomial filter banks with the perfect reconstruction property. We further describe how the eigenvector condition can be removed by using a generalized decimator. In this study, graphs are assumed to be general, with possibly non-symmetric and complex adjacency matrices.
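The directed cycle is the simplest M-block cyclic graph, and its graph Fourier basis is the ordinary DFT, so the spectrum-folding behaviour under downsample-then-upsample can be checked numerically as in the sketch below. This covers only that special case, not the general construction or the perfect-reconstruction filter banks; the graph size and number of channels are illustrative.

```python
import numpy as np

N, M = 12, 3                          # number of nodes in the cycle, number of channels
x = np.random.default_rng(0).normal(size=N)

# For the directed cycle graph the graph Fourier transform is the ordinary DFT,
# so np.fft.fft plays the role of the GFT below.
X = np.fft.fft(x)

# Downsample-then-upsample: keep every M-th node, zero out the rest.
mask = (np.arange(N) % M == 0).astype(float)
X_du = np.fft.fft(mask * x)

# Spectrum folding: the GFT of the down/up-sampled signal equals the average of
# M copies of the original spectrum, each shifted by a multiple of N/M.
X_folded = sum(np.roll(X, m * N // M) for m in range(M)) / M
print("max aliasing-identity error:", np.max(np.abs(X_du - X_folded)))
```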
{"title":"Graph filter banks with M-channels, maximal decimation, and perfect reconstruction","authors":"Oguzhan Teke, P. Vaidyanathan","doi":"10.1109/ICASSP.2016.7472446","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7472446","url":null,"abstract":"Signal processing on graphs finds applications in many areas. Motivated by recent developments, this paper studies the concept of spectrum folding (aliasing) for graph signals under the downsample-then-upsample operation. In this development, we use a special eigenvector structure that is unique to the adjacency matrix of M-block cyclic matrices. We then introduce M-channel maximally decimated filter banks. Manipulating the characteristics of the aliasing effect, we construct polynomial filter banks with perfect reconstruction property. Later we describe how we can remove the eigenvector condition by using a generalized decimator. In this study graphs are assumed to be general with a possibly non-symmetric and complex adjacency matrix.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128216746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Filterbank learning using Convolutional Restricted Boltzmann Machine for speech recognition
Pub Date: 2016-03-31 | DOI: 10.1109/ICASSP.2016.7472808
Hardik B. Sailor, H. Patil
A Convolutional Restricted Boltzmann Machine (ConvRBM) as a model for the speech signal is presented in this paper. We develop the ConvRBM with sampling from noisy rectified linear units (NReLUs). The ConvRBM is trained in an unsupervised way to model speech signals of arbitrary length, and its weights can represent an auditory-like filterbank. The learned filterbank is also nonlinear with respect to the center frequencies of the subband filters, similar to standard filterbanks (such as Mel, Bark, and ERB). We use the proposed model as a front end to learn features and apply it to speech recognition. With GMM-HMM systems, ConvRBM features improve over MFCCs with relative improvements of 5% on the TIMIT test set and 7% on both Nov'92 test sets of the WSJ0 database. With DNN-HMM systems, we achieve a relative improvement of 3% on the TIMIT test set over MFCC and Mel filterbank (FBANK) features. On the WSJ0 Nov'92 test sets, we achieve relative improvements of 4-14% using ConvRBM features over MFCC features and of 3.6-5.6% using the ConvRBM filterbank over FBANK features.
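A rough sketch of how a learned filterbank would be applied as a front end is given below: convolve the waveform with the filters, rectify, pool over frames, and log-compress. Random filters stand in for the learned ConvRBM weights, and the frame length, pooling, and compression choices are assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def filterbank_features(x, filters, hop=160, win=400):
    """Apply a bank of 1-D filters to a waveform and produce frame-level features.

    x: waveform samples; filters: (num_filters, filter_len) array, e.g. learned
    filterbank weights; hop/win: frame shift and length in samples (10 ms / 25 ms
    at 16 kHz). Returns a (num_frames, num_filters) feature matrix.
    """
    outs = [np.convolve(x, f, mode="same") for f in filters]
    rect = np.maximum(np.stack(outs), 0.0)              # half-wave rectification (ReLU-like)
    n_frames = 1 + (len(x) - win) // hop
    feats = np.empty((n_frames, len(filters)))
    for t in range(n_frames):
        seg = rect[:, t * hop: t * hop + win]
        feats[t] = np.log(seg.mean(axis=1) + 1e-8)      # average pooling + log compression
    return feats

# Usage with random filters standing in for learned weights (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=16000)                              # 1 s of "speech" at 16 kHz
filters = rng.normal(size=(40, 128)) * np.hanning(128)
print(filterbank_features(x, filters).shape)            # (n_frames, 40)
```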
{"title":"Filterbank learning using Convolutional Restricted Boltzmann Machine for speech recognition","authors":"Hardik B. Sailor, H. Patil","doi":"10.1109/ICASSP.2016.7472808","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7472808","url":null,"abstract":"Convolutional Restricted Boltzmann Machine (ConvRBM) as a model for speech signal is presented in this paper. We have developed ConvRBM with sampling from noisy rectified linear units (NReLUs). ConvRBM is trained in an unsupervised way to model speech signal of arbitrary lengths. Weights of the model can represent an auditory-like filterbank. Our proposed learned filterbank is also nonlinear with respect to center frequencies of subband filters similar to standard filterbanks (such as Mel, Bark, ERB, etc.). We have used our proposed model as a front-end to learn features and applied to speech recognition task. Performance of ConvRBM features is improved compared to MFCC with relative improvement of 5% on TIMIT test set and 7% on WSJ0 database for both Nov'92 test sets using GMM-HMM systems. With DNN-HMM systems, we achieved relative improvement of 3% on TIMIT test set over MFCC and Mel filterbank (FBANK). On WSJ0 Nov'92 test sets, we achieved relative improvement of 4-14% using ConvRBM features over MFCC features and 3.6-5.6% using ConvRBM filterbank over FBANK features.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121111659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The recursive Hessian sketch for adaptive filtering
Pub Date: 2016-03-29 | DOI: 10.1109/ICASSP.2016.7471659
Robin Scheibler, M. Vetterli
We introduce in this paper the recursive Hessian sketch, a new adaptive filtering algorithm based on sketching the same exponentially weighted least squares problem solved by the recursive least squares algorithm. The algorithm maintains a number of sketches of the inverse autocorrelation matrix and recursively updates them at random intervals. These are in turn used to update the unknown filter estimate. The complexity of the proposed algorithm compares favorably to that of recursive least squares. The convergence properties of the algorithm are studied through extensive numerical experiments. With an appropriate choice of parameters, its convergence speed falls between that of the least mean squares and recursive least squares adaptive filters, while requiring fewer computations than the latter.
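For reference, the sketch below implements the standard exponentially weighted recursive least squares recursion that the proposed method approximates by sketching; the recursive Hessian sketch itself (maintaining and randomly refreshing sketches of the inverse autocorrelation matrix) is not reproduced here, and the system-identification setup is illustrative.

```python
import numpy as np

def rls(x, d, p, lam=0.999, delta=1e2):
    """Exponentially weighted recursive least squares (the baseline the sketch approximates).

    x: input signal, d: desired signal, p: filter length, lam: forgetting factor.
    Returns the final p-tap filter estimate.
    """
    w = np.zeros(p)
    P = delta * np.eye(p)              # running estimate of the inverse autocorrelation matrix
    for n in range(p - 1, len(x)):
        u = x[n - p + 1:n + 1][::-1]   # most recent p samples, newest first
        k = P @ u / (lam + u @ P @ u)  # gain vector
        e = d[n] - w @ u               # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

# Identify an unknown 8-tap FIR system from noisy observations (illustrative setup).
rng = np.random.default_rng(0)
h = rng.normal(size=8)
x = rng.normal(size=5000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))
print("coefficient error:", np.linalg.norm(rls(x, d, 8) - h))
```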
{"title":"The recursive hessian sketch for adaptive filtering","authors":"Robin Scheibler, M. Vetterli","doi":"10.1109/ICASSP.2016.7471659","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7471659","url":null,"abstract":"We introduce in this paper the recursive Hessian sketch, a new adaptive filtering algorithm based on sketching the same exponentially weighted least squares problem solved by the recursive least squares algorithm. The algorithm maintains a number of sketches of the inverse autocorrelation matrix and recursively updates them at random intervals. These are in turn used to update the unknown filter estimate. The complexity of the proposed algorithm compares favorably to that of recursive least squares. The convergence properties of this algorithm are studied through extensive numerical experiments. With an appropriate choice or parameters, its convergence speed falls between that of least mean squares and recursive least squares adaptive filters, with less computations than the latter.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126160470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A largest matching area approach to image denoising
Pub Date: 2016-03-25 | DOI: 10.1109/ICASSP.2016.7471865
Jack Gaston, J. Ming, D. Crookes
Given the success of patch-based approaches to image denoising, this paper addresses the ill-posed problem of patch size selection. Large patch sizes improve noise robustness in the presence of good matches, but can also lead to artefacts in textured regions due to the rare patch effect; smaller patch sizes reconstruct details more accurately but risk over-fitting to the noise in uniform regions. We propose to jointly optimize each matching patch's identity and size for grayscale image denoising, and present several implementations. The new approach effectively selects the largest matching areas, subject to the constraints of the available data and noise level, to improve noise robustness. Experiments on standard test images demonstrate our approach's ability to improve on fixed-size reconstruction, particularly in smoother image regions at high noise levels.
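A toy version of the size-selection idea is sketched below: for a reference patch, the largest candidate size whose best match error is consistent with pure noise (about 2*sigma^2 for two noisy copies of the same clean content) is retained, so flat regions get large patches while textured regions fall back to small ones. The threshold, search window, and candidate sizes are illustrative assumptions, not the authors' joint optimization.

```python
import numpy as np

def best_match_mse(img, y0, x0, size, search=10):
    """Lowest MSE between the patch at (y0, x0) and any other patch in a local search window."""
    ref = img[y0:y0 + size, x0:x0 + size]
    H, W = img.shape
    best = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if dy == 0 and dx == 0:
                continue
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= H - size and 0 <= x <= W - size:
                cand = img[y:y + size, x:x + size]
                best = min(best, np.mean((ref - cand) ** 2))
    return best

def select_patch_size(img, y0, x0, sigma, sizes=(16, 12, 8, 4)):
    """Pick the largest patch size whose best match is consistent with pure noise.

    Two noisy copies of the same clean patch differ with expected MSE of 2*sigma^2,
    so sizes whose best-match error stays near that level are considered matched.
    """
    for s in sizes:                       # sizes ordered largest first
        if best_match_mse(img, y0, x0, s) <= 2.2 * sigma ** 2:
            return s
    return sizes[-1]

# Toy image: flat region plus a textured corner, with additive Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[40:, 40:] = rng.uniform(0, 1, (24, 24))
sigma = 0.1
noisy = clean + sigma * rng.normal(size=clean.shape)
print("flat region   ->", select_patch_size(noisy, 5, 5, sigma))    # expect a large size
print("textured area ->", select_patch_size(noisy, 42, 42, sigma))  # expect a small size
```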
{"title":"A largest matching area approach to image denoising","authors":"Jack Gaston, J. Ming, D. Crookes","doi":"10.1109/ICASSP.2016.7471865","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7471865","url":null,"abstract":"Given the success of patch-based approaches to image denoising, this paper addresses the ill-posed problem of patch size selection. Large patch sizes improve noise robustness in the presence of good matches, but can also lead to artefacts in textured regions due to the rare patch effect; smaller patch sizes reconstruct details more accurately but risk over-fitting to the noise in uniform regions. We propose to jointly optimize each matching patch's identity and size for grayscale image denoising, and present several implementations. The new approach effectively selects the largest matching areas, subject to the constraints of the available data and noise level, to improve noise robustness. Experiments on standard test images demonstrate our approach's ability to improve on fixed-size reconstruction, particularly at high noise levels, on smoother image regions.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"158 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128868046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Delay estimation between EEG and EMG via coherence with time lag
Pub Date: 2016-03-25 | DOI: 10.1109/ICASSP.2016.7471772
Yuhang Xu, V. McClelland, Z. Cvetković, K. Mills
The traditional way to estimate the time delay between the motor cortex and the periphery is based on estimating the slope of the phase of the cross-spectral density between motor-cortex electroencephalogram (EEG) and electromyography (EMG) signals recorded synchronously during a motor control task. Several issues can make delay estimation with this method error-prone, frequently leading to estimates that disagree with the underlying physiology. This study introduces the cortico-muscular coherence with time lag (CMCTL) function and proposes a method for estimating the delay based on finding its local maxima. We further address the issue of interpreting such time delays in multi-path propagation systems. Delay estimates obtained using the proposed method are more consistent than those obtained using the phase method and are in better agreement with physiological facts.
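A minimal sketch of the coherence-with-time-lag idea: shift the EMG relative to the EEG by each candidate lag, compute magnitude-squared coherence, and scan for the lag that maximizes it. The synthetic signals, beta-band limits, and segment length below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from scipy.signal import coherence

def coherence_vs_lag(eeg, emg, fs, lags_ms, band=(15, 30)):
    """Mean coherence in `band` between EEG and EMG realigned by each candidate lag."""
    vals = []
    for lag_ms in lags_ms:
        s = int(round(lag_ms * 1e-3 * fs))          # candidate EEG-to-EMG delay in samples
        x, y = eeg[:len(eeg) - s], emg[s:]          # realign: EMG assumed to lag EEG by s samples
        f, C = coherence(x, y, fs=fs, nperseg=64)
        sel = (f >= band[0]) & (f <= band[1])
        vals.append(C[sel].mean())
    return np.array(vals)

# Synthetic example: EMG is a delayed, noisy copy of a common drive also seen in the EEG.
rng = np.random.default_rng(0)
fs, true_delay_ms, n = 1000, 20, 60 * 1000
drive = rng.normal(size=n)
eeg = drive + 0.5 * rng.normal(size=n)
emg = np.roll(drive, int(true_delay_ms * 1e-3 * fs)) + 0.5 * rng.normal(size=n)

lags = np.arange(0, 61, 4)                          # candidate lags in ms
cmctl = coherence_vs_lag(eeg, emg, fs, lags)
print("lag maximizing coherence (ms):", lags[np.argmax(cmctl)])   # should land near 20
```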
{"title":"Delay estimation between EEG and EMG via coherence with time lag","authors":"Yuhang Xu, V. McClelland, Z. Cvetković, K. Mills","doi":"10.1109/ICASSP.2016.7471772","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7471772","url":null,"abstract":"The traditional way to estimate the time delay between the motor cortex and the periphery is based on the estimation of the slope of the phase of the cross spectral density between motor cortex electroencephalogram (EEG) and electromyography (EMG) signals recorded synchronously during a motor control task. There are several issues that could make the delay estimation using this method subject to errors, leading frequently to estimates which are in disagreement with underlying physiology. This study introduces cortico-muscular coherence with time lag (CMCTL) function and proposes a method for estimating the delay based on finding its local maxima. We further address the issue of the interpretation of such time delay in multi-path propagation systems. Delay estimates obtained using the proposed method are more consistent compared with results obtained using the phase method and in a better agreement with physiological facts.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131197532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compact kernel models for acoustic modeling via random feature selection
Pub Date: 2016-03-23 | DOI: 10.1109/ICASSP.2016.7472112
Avner May, Michael Collins, Daniel J. Hsu, Brian Kingsbury
A simple but effective method is proposed for learning compact random feature models that approximate non-linear kernel methods, in the context of acoustic modeling. The method explores a large number of non-linear features while maintaining a compact model via feature selection, and does so more efficiently than existing approaches. For certain kernels, this random feature selection may be regarded as a means of non-linear feature selection at the level of the raw input features, which motivates additional methods for computational improvements. An empirical evaluation demonstrates the effectiveness of the proposed method relative to the natural baseline method for kernel approximation.
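A sketch of the overall recipe, under assumptions about details the abstract leaves open: draw random Fourier features approximating an RBF kernel, keep only the features whose responses align most strongly with the targets, and fit a regularized linear model on the compact representation. The selection criterion, kernel bandwidth, and sizes are illustrative, not the paper's exact procedure.

```python
import numpy as np

def random_fourier_features(X, W, b):
    """Map inputs to random cosine features approximating a Gaussian (RBF) kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
n, d, D, D_small = 2000, 20, 2000, 200
X = rng.normal(size=(n, d))
y = np.sign(np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n))   # toy labels

# Random features for an RBF kernel with bandwidth sigma (illustrative value).
sigma = 2.0
W = rng.normal(scale=1.0 / sigma, size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = random_fourier_features(X, W, b)

# Feature selection: keep the D_small features whose responses align most strongly
# with the labels, then fit a ridge-regularized linear model on the compact model.
scores = np.abs(Z.T @ y) / n
keep = np.argsort(scores)[-D_small:]
Zs = Z[:, keep]
w = np.linalg.solve(Zs.T @ Zs + 1e-2 * np.eye(D_small), Zs.T @ y)
acc = np.mean(np.sign(Zs @ w) == y)
print(f"train accuracy with {D_small} selected features: {acc:.3f}")
```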
{"title":"Compact kernel models for acoustic modeling via random feature selection","authors":"Avner May, Michael Collins, Daniel J. Hsu, Brian Kingsbury","doi":"10.1109/ICASSP.2016.7472112","DOIUrl":"https://doi.org/10.1109/ICASSP.2016.7472112","url":null,"abstract":"A simple but effective method is proposed for learning compact random feature models that approximate non-linear kernel methods, in the context of acoustic modeling. The method is able to explore a large number of non-linear features while maintaining a compact model via feature selection more efficiently than existing approaches. For certain kernels, this random feature selection may be regarded as a means of non-linear feature selection at the level of the raw input features, which motivates additional methods for computational improvements. An empirical evaluation demonstrates the effectiveness of the proposed method relative to the natural baseline method for kernel approximation.","PeriodicalId":165321,"journal":{"name":"2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115854983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}