Globally Optimal Energy Efficiency Maximization for Capacity-Limited Fronthaul CRANs with Dynamic Power Amplifiers' Efficiency
Pub Date : 2018-09-13, DOI: 10.1109/ICASSP.2018.8461308
K. Nguyen, Quang-Doanh Vu, Le-Nam Tran, M. Juntti
A joint beamforming and remote radio head (RRH)-user association design for the downlink of cloud radio access networks (CRANs) is considered. The aim is to maximize the system energy efficiency subject to constraints on the users' quality of service, the capacity of fronthaul links, and the transmit power. Different from conventional power consumption models, we account for the dependence of baseband signal processing power on the data rate and for the dynamics of the power amplifiers' efficiency. The considered problem is a mixed-Boolean nonconvex program whose optimal solution is difficult to find. As our main contribution, we provide a discrete branch-reduce-and-bound (DBRnB) approach that solves the problem globally. We also make some modifications to the standard DBRnB procedure, which remarkably improve its convergence. Numerical results are provided to confirm the validity of the proposed method.
{"title":"Globally Optimal Energy Efficiency Maximization for Capacity-Limited Fronthaul Crans with Dynamic Power Amplifiers’ Efficiency","authors":"K. Nguyen, Quang-Doanh Vu, Le-Nam Tran, M. Juntti","doi":"10.1109/ICASSP.2018.8461308","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461308","url":null,"abstract":"A joint beamforming and remote radio head (RRH)-user association design for downlink of cloud radio access networks (CRANs) is considered. The aim is to maximize the system energy efficiency subject to constraints on users' quality-of-service, capacity offronthaullinks and transmit power. Different to the conventional power consumption models, we embrace the dependence of baseband signal processing power on the data rate, and the dynamics of the power amplifiers' efficiency. The considered problem is a mixed Boolean nonconvex program whose optimal solution is difficult to find. As our main contribution, we provide a discrete branch-reduce-and-bound (DBRnB) approach to solve the problem globally. We also make some modifications to the standard DBRnB procedure. Those remarkably improve the convergence performance. Numerical results are provided to confirm the validity of the proposed method.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"346 1","pages":"3759-3763"},"PeriodicalIF":0.0,"publicationDate":"2018-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83456041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Orthogonality-Regularized Masked NMF for Learning on Weakly Labeled Audio Data
Pub Date : 2018-09-13, DOI: 10.1109/ICASSP.2018.8461293
I. Sobieraj, Lucas Rencker, Mark D. Plumbley
Non-negative Matrix Factorization (NMF) is a well-established tool for audio analysis. However, it is not well suited to learning from weakly labeled data, i.e., data for which the exact timestamps of the sounds of interest are not known. In this paper we propose a novel extension of NMF that allows it to extract meaningful representations from weakly labeled audio data. Recently, a constraint on the activation matrix was proposed to adapt NMF to learning with weak labels. To further improve the method, we propose to add an orthogonality regularizer on the dictionary to the NMF cost function. In this way we obtain appropriate dictionaries for the sounds of interest and the background sounds from weakly labeled data. We demonstrate that the proposed Orthogonality-Regularized Masked NMF (ORM-NMF) can be used for audio event detection of rare events, and we evaluate the method on the development data of Task 2 of the DCASE 2017 Challenge.
{"title":"Orthogonality-Regularized Masked NMF for Learning on Weakly Labeled Audio Data","authors":"I. Sobieraj, Lucas Rencker, Mark D. Plumbley","doi":"10.1109/ICASSP.2018.8461293","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461293","url":null,"abstract":"Non-negative Matrix Factorization (NMF) is a well established tool for audio analysis. However, it is not well suited for learning on weakly labeled data, i.e. data where the exact timestamp of the sound of interest is not known. In this paper we propose a novel extension to NMF, that allows it to extract meaningful representations from weakly labeled audio data. Recently, a constraint on the activation matrix was proposed to adapt for learning on weak labels. To further improve the method we propose to add an orthogonality regularizer of the dictionary in the cost function of NMF. In that way we obtain appropriate dictionaries for the sounds of interest and background sounds from weakly labeled data. We demonstrate that the proposed Orthogonality-Regularized Masked NMF (ORM-NMF) can be used for Audio Event Detection of rare events and evaluate the method on the development data from Task2 of DCASE2017 Challenge.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"34 1","pages":"2436-2440"},"PeriodicalIF":0.0,"publicationDate":"2018-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90380650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individual Difference of Ultrasonic Transducers for Parametric Array Loudspeaker
Pub Date : 2018-09-10, DOI: 10.1109/ICASSP.2018.8462189
Shota Minami, Jun Kuroda, Yasuhiro Oikawa
A parametric array loudspeaker (PAL) typically consists of many ultrasonic transducers and is driven by an ultrasonic carrier modulated by an audible signal. Because each ultrasonic transducer has a slightly different resonant frequency, individual differences among the transducers of a PAL arise in the manufacturing process. In this paper, two PALs are built from sets of transducers with large and small variance of the resonant frequencies, respectively. The quality factor of the PAL with the large variance of resonant frequencies is smaller than that of the PAL with the small variance, and its demodulated audible sound pressure level (SPL) is large and almost flat up to 3 kHz.
{"title":"Individual Difference of Ultrasonic Transducers for Parametric Array Loudspeaker","authors":"Shota Minami, Jun Kuroda, Yasuhiro Oikawa","doi":"10.1109/ICASSP.2018.8462189","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462189","url":null,"abstract":"A parametric array loudspeaker (PAL) consists of a lot of ultrasonic transducers in most cases and is driven by an ultrasonic which is modulated by audible sound. Because each ultrasonic transducer has each difference resonant frequency, there is the individual difference in ultrasonic transducers of a PAL in a manufacturing process. In this paper, two PALs are made of each set of transducers with large and small variance of resonant frequencies. Quality factor of PAL with the large variance of resonant frequencies is smaller than that of PAL with small variance, and the demodulated audible sound pressure level (SPL) is large and almost flat to 3 kHz in PAL with the large variance of resonant frequencies.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"48 1","pages":"486-490"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83428538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quickest Detection of Dynamic Events in Sensor Networks
Pub Date : 2018-09-10, DOI: 10.1109/ICASSP.2018.8461854
Shaofeng Zou, V. Veeravalli
We consider the problem of quickest detection of dynamic events in sensor networks. After an event occurs, a number of sensors are affected and undergo a change in the statistics of their observations. We assume that the event is dynamic and can propagate with time, i.e., different sensors perceive the event at different times. The goal is to design a sequential algorithm that can detect when the event has affected no less than η sensors as quickly as possible, subject to false alarm constraints. We design a computationally efficient algorithm that is adaptive to unknown propagation dynamics, and demonstrate its asymptotic optimality as the false alarm rate goes to zero. We also provide numerical simulations to validate our theoretical results.
{"title":"Quickest Detection of Dynamic Events in Sensor Networks","authors":"Shaofeng Zou, V. Veeravalli","doi":"10.1109/ICASSP.2018.8461854","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461854","url":null,"abstract":"We consider the problem of quickest detection of dynamic events in sensor networks. After an event occurs, a number of sensors are affected and undergo a change in the statistics of their observations. We assume that the event is dynamic and can propagate with time, i.e., different sensors perceive the event at different times. The goal is to design a sequential algorithm that can detect when the event has affected no less than η sensors as quickly as possible, subject to false alarm constraints. We design a computationally efficient algorithm that is adaptive to unknown propagation dynamics, and demonstrate its asymptotic optimality as the false alarm rate goes to zero. We also provide numerical simulations to validate our theoretical results.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"5 1","pages":"6907-6911"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85207765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Selecting Antenna Placements in Indoor Radio Environments
Pub Date : 2018-09-10, DOI: 10.1109/ICASSP.2018.8462374
Fabian Agren, Johan Sward, A. Jakobsson
In this work, we introduce an antenna placement algorithm for indoor radio networks. The algorithm aims to minimize the number of antennas required to provide sufficient coverage in an area of interest, thereby minimizing the cost of equipment and installation work. The optimization exploits a semi-deterministic model for the most dominant radio paths, each of which is determined with the A⋆ path-finding algorithm. Both the proposed antenna placement algorithm and the underlying indoor radio propagation model are evaluated on real measurements, confirming the efficiency of the method.
{"title":"On Selecting Antenna Placements in Indoor Radio Environments","authors":"Fabian Agren, Johan Sward, A. Jakobsson","doi":"10.1109/ICASSP.2018.8462374","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462374","url":null,"abstract":"In this work, we introduce an antenna placement algorithm for indoor radio networks. The algorithm aims to minimize the number of antennas required to provide sufficient coverage in an area of interest, minimizing the cost of equipment and installation work. The optimization algorithm exploits a semi-deterministic model for the most dominant radio paths. Each path is in turn determined with the A⋆ path finding algorithm. Both the proposed antenna placement algorithm and the used indoor radio propagation model are evaluated using real measurements, confirming the efficiency of the method.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"9 1","pages":"3719-3723"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81959547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Penalized Method for the Predictive Limit of Learning
Pub Date : 2018-09-10, DOI: 10.1109/ICASSP.2018.8461832
Jie Ding, Enmao Diao, Jiawei Zhou, V. Tarokh
Machine learning systems learn from observed data and make predictions by building models. Because large models tend to overfit while small models tend to underfit on a given fixed dataset, a critical challenge is to select an appropriate model (e.g., a set of variables/features). Model selection aims to strike a balance between goodness of fit and model complexity, and thus to gain reliable predictive power. In this paper, we study a penalized model selection technique that asymptotically achieves the optimal expected prediction loss (referred to as the limit of learning) offered by a set of candidate models. We prove that the proposed procedure is both statistically efficient, in the sense that it asymptotically approaches the limit of learning, and computationally efficient, in the sense that it can be much faster than cross-validation methods. Our theory applies to a wide variety of model classes, loss functions, and high dimensions (in the sense that the models' complexity can grow with the data size). We have released a Python package implementing the proposed method for general use, e.g., with logistic regression and neural networks.
{"title":"A Penalized Method for the Predictive Limit of Learning","authors":"Jie Ding, Enmao Diao, Jiawei Zhou, V. Tarokh","doi":"10.1109/ICASSP.2018.8461832","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461832","url":null,"abstract":"Machine learning systems learn from and make predictions by building models from observed data. Because large models tend to overfit while small models tend to underfit for a given fixed dataset, a critical challenge is to select an appropriate model (e.g. set of variables/features). Model selection aims to strike a balance between the goodness of fit and model complexity, and thus to gain reliable predictive power. In this paper, we study a penalized model selection technique that asymptotically achieves the optimal expected prediction loss (referred to as the limit of learning) offered by a set of candidate models. We prove that the proposed procedure is both statistically efficient in the sense that it asymptotically approaches the limit of learning, and computationally efficient in the sense that it can be much faster than cross validation methods. Our theory applies for a wide variety of model classes, loss functions, and high dimensions (in the sense that the models' complexity can grow with data size). We released a python package with our proposed method for general usage like logistic regression and neural networks.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"38 1","pages":"4414-4418"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86695862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurrent Clutter and Noise Suppression via Low Rank Plus Sparse Optimization for Non-Contrast Ultrasound Flow Doppler Processing in Microvasculature
Pub Date : 2018-09-10, DOI: 10.1109/ICASSP.2018.8461638
Mahdi Bayat, M. Fatemi
A low-rank-plus-sparse framework for concurrent clutter and noise suppression in Doppler processing of echo ensembles obtained by non-contrast ultrasound imaging is presented. The echo ensemble is modeled as a low-rank component, which mostly represents the strong tissue clutter signal, plus a sparse component, which mostly represents blood echoes received from slow flows in the microvasculature. The proposed method is applied to simulated data, and its superior performance over conventional singular value thresholding in removing clutter and background noise is demonstrated.
{"title":"Concurrent Clutter and Noise Suppression via Low Rank Plus Sparse Optimization for Non-Contrast Ultrasound Flow Doppler Processing in Microvasculature","authors":"Mahdi Bayat, M. Fatemi","doi":"10.1109/ICASSP.2018.8461638","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461638","url":null,"abstract":"A low rank plus sparse framework for concurrent clutter and noise suppression in Doppler processing of echo ensembles obtained by non-contrast ultrasound imaging is presented. A low rank component which represents mostly strong tissue clutter signal and a sparse component which represents mostly blood echoes received from slow flows in microvasculature are assumed. The proposed method is applied to simulated data and its superior performance over conventional singular value thresholding in removing clutter and background noise is presented.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"57 1","pages":"1080-1084"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89045626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Multi-Kernel Learning with Orthogonal Random Features
Pub Date : 2018-09-10, DOI: 10.1109/ICASSP.2018.8461509
Yanning Shen, Tianyi Chen, G. Giannakis
Kernel-based methods have well-appreciated performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. To cope with this limitation, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly.’ Performance analysis shows that the novel algorithm can afford sublinear regret. Numerical tests on real datasets are carried out to showcase the effectiveness of the proposed algorithms.
{"title":"Online Multi-Kernel Learning with Orthogonal Random Features","authors":"Yanning Shen, Tianyi Chen, G. Giannakis","doi":"10.1109/ICASSP.2018.8461509","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461509","url":null,"abstract":"Kernel-based methods have well-appreciated performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. To cope with this limitation, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly.’ Performance analysis shows that the novel algorithm can afford sublinear regret. Numerical tests on real datasets are carried out to showcase the effectiveness of the proposed algorithms.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"11 1","pages":"6289-6293"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90154936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Analytical Method to Determine Minimum Per-Layer Precision of Deep Neural Networks
Pub Date : 2018-09-10, DOI: 10.1109/ICASSP.2018.8461702
Charbel Sakr, Naresh R Shanbhag
There has been growing interest in the deployment of deep learning systems onto resource-constrained platforms for fast and efficient inference. However, typical models are overwhelmingly complex, making such integration very challenging and requiring compression mechanisms such as reduced precision. We present a layer-wise granular precision analysis which allows us to efficiently quantize pre-trained deep neural networks at minimal cost in terms of accuracy degradation. Our results are consistent with recent findings that perturbations in earlier layers are most destructive, so these layers need more precision than later ones. Our approach allows for significant complexity reduction, as demonstrated by numerical results on the MNIST and CIFAR-10 datasets. Indeed, for an equivalent level of accuracy, our fine-grained approach reduces the minimum precision in the network by up to 8 bits over a naive uniform assignment. Furthermore, we match the accuracy of a state-of-the-art binary network while requiring up to ~3.5× lower complexity. Similarly, compared to a state-of-the-art fixed-point network, the complexity savings are even higher (up to ~14×) with no loss in accuracy.
{"title":"An Analytical Method to Determine Minimum Per-Layer Precision of Deep Neural Networks","authors":"Charbel Sakr, Naresh R Shanbhag","doi":"10.1109/ICASSP.2018.8461702","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461702","url":null,"abstract":"There has been growing interest in the deployment of deep learning systems onto resource-constrained platforms for fast and efficient inference. However, typical models are overwhelmingly complex, making such integration very challenging and requiring compression mechanisms such as reduced precision. We present a layer-wise granular precision analysis which allows us to efficiently quantize pre-trained deep neural networks at minimal cost in terms of accuracy degradation. Our results are consistent with recent findings that perturbations in earlier layers are most destructive and hence needing more precision than in later layers. Our approach allows for significant complexity reduction demonstrated by numerical results on the MNIST and CIFAR-10 datasets. Indeed, for an equivalent level of accuracy, our fine-grained approach reduces the minimum precision in the network up to 8 bits over a naive uniform assignment. Furthermore, we match the accuracy level of a state-of-the-art binary network while requiring up to ~ 3.5 × lower complexity. Similarly, when compared to a state-of-the-art fixed-point network, the complexity savings are even higher (up to ~ 14×) with no loss in accuracy.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"124 1","pages":"1090-1094"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88035638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Harnessing Bandit Online Learning to Low-Latency Fog Computing
Pub Date : 2018-09-10, DOI: 10.1109/ICASSP.2018.8461641
Tianyi Chen, G. Giannakis
This paper focuses on online fog computing tasks in the Internet of Things (IoT), where online decisions must flexibly adapt to changing user preferences (loss functions) and the temporally unpredictable availability of resources (constraints). Tailored to such human-in-the-loop systems, where the loss functions are hard to model, a family of bandit online saddle-point (BanSP) schemes is developed; these adaptively adjust the online operations based on (possibly multiple) bandit feedback on the loss functions and on the changing environment. Performance here is assessed by: i) dynamic regret, which generalizes the widely used static regret; and ii) fit, which captures the accumulated amount of constraint violation. Specifically, BanSP is proved to simultaneously yield sub-linear dynamic regret and fit, provided that the best dynamic solutions vary slowly over time. Numerical tests on fog computing tasks corroborate that BanSP offers the desired performance under such limited information.
{"title":"Harnessing Bandit Online Learning to Low-Latency Fog Computing","authors":"Tianyi Chen, G. Giannakis","doi":"10.1109/ICASSP.2018.8461641","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461641","url":null,"abstract":"This paper focuses on the online fog computing tasks in the Internet-of-Things (IoT), where online decisions must flexibly adapt to the changing user preferences (loss functions), and the temporally unpredictable availability of resources (constraints). Tailored for such human-in-the-loop systems where the loss functions are hard to model, a family of bandit online saddle-point (BanSP) schemes are developed, which adaptively adjust the online operations based on (possibly multiple) bandit feedback of the loss functions, and the changing environment. Performance here is assessed by: i) dynamic regret that generalizes the widely used static regret; and, ii) fit that captures the accumulated amount of constraint violations. Specifically, BanSP is proved to simultaneously yield sub-linear dynamic regret and fit, provided that the best dynamic solutions vary slowly over time. Numerical tests on fog computing tasks corroborate that BanSP offers desired performance under such limited information.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"1 1","pages":"6418-6422"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80240597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}