Kalman Filtering and Clustering in Sensor Networks
S. Talebi, Stefan Werner, V. Koivunen
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8462039 | ICASSP 2018, pp. 4309-4313
In this work, a distributed Kalman filtering and clustering framework is developed for sensor networks tasked with tracking multiple state vector sequences. This is achieved by recursively updating the likelihood that the state vector estimate from one agent offers valid information about the state vectors of its neighbors, given the available observation data. These likelihoods then form the diffusion coefficients used for information fusion over the sensor network. For rigour, the mean and mean-square behavior of the developed Kalman filtering and clustering framework is analyzed, convergence criteria are established, and the performance of the framework is demonstrated in a simulation example.
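As a rough illustration of the fusion mechanism described in the abstract, the sketch below runs a standard local Kalman update per agent and then combines neighbour estimates with weights proportional to each agent's observation likelihood. The variable names, the likelihood-to-weight mapping, and the network structure are assumptions for illustration, not the authors' exact recursions.

```python
# Minimal sketch of one diffusion-Kalman step with likelihood-based fusion weights
# (hypothetical shapes and variable names; not the paper's exact algorithm).
import numpy as np

def local_kalman_step(x, P, y, A, C, Q, R):
    """Standard Kalman predict + update for one agent."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                      # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)           # Kalman gain
    innov = y - C @ x_pred
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    # Gaussian likelihood of this agent's observation under its own model;
    # used below to decide how much weight its estimate receives in fusion.
    lik = np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) / np.sqrt(
        np.linalg.det(2 * np.pi * S))
    return x_new, P_new, lik

def diffuse(estimates, liks, neighbors, k):
    """Fuse agent k's estimate with its neighbors' (neighbors[k] includes k),
    weighting each by its observation likelihood."""
    idx = neighbors[k]
    w = np.array([liks[j] for j in idx])
    w = w / w.sum()                               # diffusion coefficients
    return sum(wj * estimates[j] for wj, j in zip(w, idx))
```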
{"title":"Kalman Filtering and Clustering in Sensor Networks","authors":"S. Talebi, Stefan Werner, V. Koivunen","doi":"10.1109/ICASSP.2018.8462039","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462039","url":null,"abstract":"In this work, a distributed Kalman filtering and clustering framework for sensor networks tasked with tracking multiple state vector sequences is developed. This is achieved through recursively updating the likelihood of a state vector estimation from one agent offering valid information about the state vector of its neighbors, given the available observation data. These likelihoods then form the diffusion coefficients, used for information fusion over the sensor network. For rigour, the mean and mean square behavior of the developed Kalman filtering and clustering framework is analyzed, convergence criteria are established, and the performance of the developed framework is demonstrated in a simulation example.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"32 1","pages":"4309-4313"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88550520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Injection Attack on Decentralized Optimization
Sissi Xiaoxiao Wu, Hoi-To Wai, A. Scaglione, A. Nedić, Amir Leshem
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8462528 | ICASSP 2018, pp. 3644-3648
This paper studies the security of gossip-based decentralized optimization algorithms for multi-agent systems against data injection attacks. Our contributions are two-fold. First, we show that the popular distributed projected gradient method (by Nedić et al.) can be compromised by coordinated insider attacks, in which the attackers are able to steer the final state to a point of their choosing. Second, we propose a metric that can be computed locally by the trustworthy agents by processing their own iterates and those of their neighboring agents. This metric can be used by the trustworthy agents to detect and localize the attackers. We conclude the paper by supporting our findings with numerical experiments.
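A toy illustration of the first contribution: in the sketch below (the network, local objectives, and step sizes are made up), a single insider never updates its iterate and simply keeps reporting a target value, which is enough to steer the honest agents of a distributed projected gradient method toward that value.

```python
# Toy data-injection attack on distributed projected gradient descent.
import numpy as np

n, T = 6, 2000
W = np.full((n, n), 1.0 / n)           # doubly-stochastic mixing matrix (complete graph)
targets = np.arange(n, dtype=float)     # agent i locally minimizes (x - i)^2
x = np.zeros(n)
attacker, x_attack = 0, 42.0            # agent 0 always reports 42

for t in range(1, T + 1):
    grad = 2 * (x - targets)
    x = W @ x - (1.0 / t) * grad        # consensus step + diminishing-step gradient
    x = np.clip(x, -100, 100)           # projection onto a box constraint set
    x[attacker] = x_attack              # data injection: the attacker never updates

print(x)  # the honest agents' iterates are dragged toward the attacker's value 42
```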
{"title":"Data Injection Attack on Decentralized Optimization","authors":"Sissi Xiaoxiao Wu, Hoi-To Wai, A. Scaglione, A. Nedić, Amir Leshem","doi":"10.1109/ICASSP.2018.8462528","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462528","url":null,"abstract":"This paper studies the security aspect of gossip-based decentralized optimization algorithms for multi agent systems against data injection attacks. Our contributions are two-fold. First, we show that the popular distributed projected gradient method (by Nedić et al.) can be attacked by coordinated insider attacks, in which the attackers are able to steer the final state to a point of their choosing. Second, we propose a metric that can be computed locally by the trustworthy agents processing their own iterates and those of their neighboring agents. This metric can be used by the trustworthy agents to detect and localize the attackers. We conclude the paper by supporting our findings with numerical experiments.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"24 1","pages":"3644-3648"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89145823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
End-to-End Multi-Speaker Speech Recognition
Shane Settle, Jonathan Le Roux, Takaaki Hori, Shinji Watanabe, J. Hershey
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8461893 | ICASSP 2018, pp. 4819-4823
Current advances in deep learning have resulted in a convergence of methods across a wide range of tasks, opening the door for tighter integration of modules that were previously developed and optimized in isolation. Recent ground-breaking works have produced end-to-end deep network methods for both speech separation and end-to-end automatic speech recognition (ASR). Speech separation methods such as deep clustering address the challenging cocktail-party problem of distinguishing multiple simultaneous speech signals. This is an enabling technology for real-world human-machine interaction (HMI). However, speech separation requires ASR to interpret the speech for any HMI task. Likewise, ASR requires speech separation to work in an unconstrained environment. Although these two components can be trained in isolation and connected after the fact, this paradigm is likely to be sub-optimal, since it relies on artificially mixed data. In this paper, we develop the first fully end-to-end, jointly trained deep learning system for separation and recognition of overlapping speech signals. The joint training framework synergistically adapts the separation and recognition to each other. As an additional benefit, it enables training on more realistic data that contains only mixed signals and their transcriptions, and is thus suited to large-scale training on existing transcribed data.
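Joint training of separation and recognition is typically made invariant to the ordering of the separated outputs, since that ordering is arbitrary. The sketch below shows a generic permutation-free loss of that kind; the pairwise loss values are placeholders, and the paper's exact objective and network are not reproduced here.

```python
# Permutation-free training loss over multiple separated outputs.
import itertools
import numpy as np

def permutation_free_loss(losses):
    """losses[i][j] = loss of network output i scored against reference j.
    Returns the minimum total loss over all assignments of outputs to references."""
    n = len(losses)
    best = np.inf
    for perm in itertools.permutations(range(n)):
        best = min(best, sum(losses[i][perm[i]] for i in range(n)))
    return best

# e.g. pairwise recognition losses for a two-speaker mixture
pairwise = [[2.3, 7.9],
            [8.4, 1.7]]
print(permutation_free_loss(pairwise))   # 4.0: output 0 -> ref 0, output 1 -> ref 1
```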
{"title":"End-to-End Multi-Speaker Speech Recognition","authors":"Shane Settle, Jonathan Le Roux, Takaaki Hori, Shinji Watanabe, J. Hershey","doi":"10.1109/ICASSP.2018.8461893","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461893","url":null,"abstract":"Current advances in deep learning have resulted in a convergence of methods across a wide range of tasks, opening the door for tighter integration of modules that were previously developed and optimized in isolation. Recent ground-breaking works have produced end-to-end deep network methods for both speech separation and end-to-end automatic speech recognition (ASR). Speech separation methods such as deep clustering address the challenging cocktail-party problem of distinguishing multiple simultaneous speech signals. This is an enabling technology for real-world human machine interaction (HMI). However, speech separation requires ASR to interpret the speech for any HMI task. Likewise, ASR requires speech separation to work in an unconstrained environment. Although these two components can be trained in isolation and connected after the fact, this paradigm is likely to be sub-optimal, since it relies on artificially mixed data. In this paper, we develop the first fully end-to-end, jointly trained deep learning system for separation and recognition of overlapping speech signals. The joint training framework synergistically adapts the separation and recognition to each other. As an additional benefit, it enables training on more realistic data that contains only mixed signals and their transcriptions, and thus is suited to large scale training on existing transcribed data.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"38 1","pages":"4819-4823"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74298807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How are the Centered Kernel Principal Components Relevant to Regression Task? - An Exact Analysis
M. Yukawa, K. Müller, Yuto Ogino
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8462392 | ICASSP 2018, pp. 2841-2845
We present an exact analytic expression for the contributions of the kernel principal components to the relevant information in a nonlinear regression problem. A related study was presented by Braun, Buhmann, and Müller in 2008, where an upper bound on the contributions was given for a general supervised learning problem, but with “uncentered” kernel PCAs. Our analysis clarifies that the relevant information of a kernel regression under an explicit centering operation is contained in a finite number of leading kernel principal components, as in the “uncentered” kernel PCA case, if the kernel matches the underlying nonlinear function so that the eigenvalues of the centered kernel matrix decay quickly. We compare the regression performances of the least-squares-based methods with the centered and uncentered kernel PCAs by simulations.
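A quick numerical way to see the claim is to explicitly centre a Gram matrix, eigendecompose it, and measure how much of the (centred) target is captured by each kernel principal component. The sketch below does that for a toy regression problem; the kernel, data, and the "energy captured per component" diagnostic are illustrative choices, not the paper's exact analytic expression.

```python
# How much of the regression target lies in the leading centered kernel PCs.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)

K = rbf_kernel(X)
n = len(K)
H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
Kc = H @ K @ H                               # explicitly centered kernel matrix

vals, vecs = np.linalg.eigh(Kc)
order = np.argsort(vals)[::-1]               # sort components by eigenvalue, descending
vecs = vecs[:, order]

yc = y - y.mean()
contrib = (vecs.T @ yc) ** 2                 # target energy captured per component
cum = np.cumsum(contrib) / (yc @ yc)
print(cum[:10])  # most of the relevant information sits in a few leading components
```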
{"title":"How are the Centered Kernel Principal Components Relevant to Regression Task? -An Exact Analysis","authors":"M. Yukawa, K. Müller, Yuto Ogino","doi":"10.1109/ICASSP.2018.8462392","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462392","url":null,"abstract":"We present an exact analytic expression of the contributions of the kernel principal components to the relevant information in a nonlinear regression problem. A related study has been presented by Braun, Buhmann, and Müller in 2008, where an upper bound of the contributions was given for a general supervised learning problem but with “uncentered” kernel PCAs. Our analysis clarifies that the relevant information of a kernel regression under explicit centering operation is contained in a finite number of leading kernel principal components, as in the “uncentered” kernel-Pca case, if the kernel matches the underlying nonlinear function so that the eigenvalues of the centered kernel matrix decay quickly. We compare the regression performances of the least-square-based methods with the centered and uncentered kernel PCAs by simulations.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"1 1","pages":"2841-2845"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82497587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large-Scale Regularized SUMCOR GCCA via Penalty-Dual Decomposition
Charilaos I. Kanatsoulis, Xiao Fu, N. Sidiropoulos, Mingyi Hong
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8462354 | ICASSP 2018, pp. 6363-6367
The sum-of-correlations (SUMCOR) generalized canonical correlation analysis (GCCA) aims at producing low-dimensional representations of multiview data via enforcing pairwise similarity of the reduced-dimension views. SUMCOR has been applied to a large variety of applications including blind separation, multilingual word embedding, and cross-modality retrieval. Despite the NP-hardness of SUMCOR, recent work has proposed effective algorithms for handling it at very large scale. However, the existing scalable algorithms are not easy to extend to incorporate structural regularization and prior information - which are critical for real-world applications where outliers and modeling mismatches are present. In this work, we propose a new computational framework for large-scale SUMCOR GCCA. The algorithm can easily incorporate a suite of structural regularizers which are frequently used in data analytics, has lightweight updates and low memory complexity, and can be easily implemented in a parallel fashion. The proposed algorithm is also guaranteed to converge to a Karush-Kuhn-Tucker (KKT) point of the regularized SUMCOR problem. Carefully designed simulations are employed to demonstrate the effectiveness of the proposed algorithm.
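For reference, the SUMCOR objective itself is the sum of pairwise inner products between the reduced-dimension views. The sketch below evaluates that objective for given orthonormal projections; the data, dimensions, and notation are assumed for illustration, and the penalty-dual-decomposition solver proposed in the paper is not shown.

```python
# Evaluating the SUMCOR objective for a set of reduced-dimension views.
import numpy as np

def sumcor_objective(Xs, Qs):
    """Sum of pairwise inner products (correlations) between reduced views X_i @ Q_i."""
    Gs = [X @ Q for X, Q in zip(Xs, Qs)]     # reduced-dimension representations
    total = 0.0
    for i in range(len(Gs)):
        for j in range(i + 1, len(Gs)):
            total += np.trace(Gs[i].T @ Gs[j])
    return total

rng = np.random.default_rng(1)
Xs = [rng.standard_normal((100, 20)) for _ in range(3)]            # three views
Qs = [np.linalg.qr(rng.standard_normal((20, 5)))[0] for _ in range(3)]  # orthonormal projections
print(sumcor_objective(Xs, Qs))
```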
{"title":"Large-Scale Regularized Sumcor GCCA via Penalty-Dual Decomposition","authors":"Charilaos I. Kanatsoulis, Xiao Fu, N. Sidiropoulos, Mingyi Hong","doi":"10.1109/ICASSP.2018.8462354","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462354","url":null,"abstract":"The sum-of-correlations (SUMCOR) generalized canonical correlation analysis (GCCA) aims at producing low-dimensional representations of multiview data via enforcing pairwise similarity of the reduced-dimension views. SUMCOR has been applied to a large variety of applications including blind separation, multilingual word embedding, and cross-modality retrieval. Despite the NP-hardness of SUMCOR, recent work has proposed effective algorithms for handling it at very large scale. However, the existing scalable algorithms are not easy to extend to incorporate structural regularization and prior information - which are critical for real-world applications where outliers and modeling mismatches are present. In this work, we propose a new computational framework for large-scale SUMCOR GCCA. The algorithm can easily incorporate a suite of structural regularizers which are frequently used in data analytics, has lightweight updates and low memory complexity, and can be easily implemented in a parallel fashion. The proposed algorithm is also guaranteed to converge to a Karush-Kuhn-Tucker (KKT) point of the regularized SUMCOR problem. Carefully designed simulations are employed to demonstrate the effectiveness of the proposed algorithm.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"7 1","pages":"6363-6367"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84153948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bitwise Neural Networks for Efficient Single-Channel Source Separation
Minje Kim, P. Smaragdis
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8461824 | ICASSP 2018, pp. 701-705
We present Bitwise Neural Networks (BNN) as an efficient hardware-friendly solution to single-channel source separation tasks in resource-constrained environments. In the proposed BNN system, we replace all the real-valued operations during the feedforward process of a Deep Neural Network (DNN) with bitwise arithmetic (e.g. the XNOR operation between bipolar binaries in place of multiplications). Thanks to the fully bitwise run-time operations, the BNN system can serve as an alternative solution where efficient real-time processing is critical, for example real-time speech enhancement in embedded systems. Furthermore, we also propose a binarization scheme to convert the input signals into bit strings so that the BNN parameters learn the Boolean mapping between input binarized mixture signals and their target Ideal Binary Masks (IBM). Experiments on the single-channel speech denoising tasks show that the efficient BNN-based source separation system works well with an acceptable performance loss compared to a comprehensive real-valued network, while consuming a minimal amount of resources.
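The core arithmetic trick can be demonstrated in a few lines: for bipolar (+1/-1) vectors, the inner product equals the number of sign agreements minus disagreements, which hardware can obtain with XNOR and a popcount instead of multiplications. A minimal sketch, independent of the paper's network architecture:

```python
# XNOR + popcount reproduces the bipolar dot product.
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=64)     # bipolar activations
w = rng.choice([-1, 1], size=64)     # bipolar weights

xb = (x > 0).astype(np.uint8)        # bit encoding: +1 -> 1, -1 -> 0
wb = (w > 0).astype(np.uint8)

xnor = 1 - (xb ^ wb)                 # 1 wherever the signs agree
matches = int(xnor.sum())            # popcount of agreements
dot_bitwise = 2 * matches - len(x)   # agreements minus disagreements

assert dot_bitwise == int(x @ w)     # identical to the real-valued dot product
print(dot_bitwise)
```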
{"title":"Bitwise Neural Networks for Efficient Single-Channel Source Separation","authors":"Minje Kim, P. Smaragdis","doi":"10.1109/ICASSP.2018.8461824","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461824","url":null,"abstract":"We present Bitwise Neural Networks (BNN) as an efficient hardware-friendly solution to single-channel source separation tasks in resource-constrained environments. In the proposed BNN system, we replace all the real-valued operations during the feedforward process of a Deep Neural Network (DNN) with bitwise arithmetic (e.g. the XNOR operation between bipolar binaries in place of multiplications). Thanks to the fully bitwise run-time operations, the BNN system can serve as an alternative solution where efficient real-time processing is critical, for example real-time speech enhancement in embedded systems. Furthermore, we also propose a binarization scheme to convert the input signals into bit strings so that the BNN parameters learn the Boolean mapping between input binarized mixture signals and their target Ideal Binary Masks (IBM). Experiments on the single-channel speech denoising tasks show that the efficient BNN-based source separation system works well with an acceptable performance loss compared to a comprehensive real-valued network, while consuming a minimal amount of resources.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"30 1","pages":"701-705"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78677812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Particle Filtering and Inference for Limit Order Books in High Frequency Finance
Pinzhang Wang, Lin Li, S. Godsill
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8462072 | ICASSP 2018, pp. 4264-4268
This paper investigates the on-line analysis of high-frequency financial order book data using Bayesian modelling techniques. Order book data involves evolving queues of orders at different prices, and here we propose that the order book shape is proportional to a gamma or inverse-gamma density function. Inference for these models is implemented on-line using particle filters and evaluated on a high-frequency EURUSD foreign exchange limit order book. The two possible order book shapes are tested using particle filter marginal likelihood estimates and in addition, heat maps are constructed based on the inference results to reveal the imbalance of order distributions between the two sides of an order book, thereby offering valuable insights into the movements of future prices.
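The modelling assumption can be illustrated directly: the volume profile on one side of the book, as a function of distance from the best quote, is taken proportional to a gamma or inverse-gamma density. The sketch below evaluates such profiles and a crude near-touch imbalance of the kind a heat map would visualise; all parameter and volume values are made up, and the particle-filter inference itself is not shown.

```python
# Gamma / inverse-gamma shaped order-book profiles and a simple imbalance measure.
import numpy as np
from scipy.stats import gamma, invgamma

ticks = np.arange(1, 21)                             # price levels away from the best quote
bid_shape = gamma.pdf(ticks, a=2.0, scale=3.0)       # gamma-shaped bid side (toy parameters)
ask_shape = invgamma.pdf(ticks, a=3.0, scale=8.0)    # inverse-gamma-shaped ask side

total_bid, total_ask = 5000, 3000                    # hypothetical queue sizes
bid_profile = total_bid * bid_shape / bid_shape.sum()
ask_profile = total_ask * ask_shape / ask_shape.sum()

near = slice(0, 5)                                   # levels closest to the mid price
imbalance = (bid_profile[near].sum() - ask_profile[near].sum()) / (
    bid_profile[near].sum() + ask_profile[near].sum())
print(imbalance)  # positive values suggest buy-side pressure near the touch
```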
{"title":"Particle Filtering and Inference for Limit Order Books in High Frequency Finance","authors":"Pinzhang Wang, Lin Li, S. Godsill","doi":"10.1109/ICASSP.2018.8462072","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462072","url":null,"abstract":"This paper investigates the on-line analysis of high-frequency financial order book data using Bayesian modelling techniques. Order book data involves evolving queues of orders at different prices, and here we propose that the order book shape is proportional to a gamma or inverse-gamma density function. Inference for these models is implemented on-line using particle filters and evaluated on a high-frequency EURUSD foreign exchange limit order book. The two possible order book shapes are tested using particle filter marginal likelihood estimates and in addition, heat maps are constructed based on the inference results to reveal the imbalance of order distributions between the two sides of an order book, thereby offering valuable insights into the movements of future prices.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"55 1","pages":"4264-4268"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75292007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying Susceptible Agents in Time Varying Opinion Dynamics Through Compressive Measurements
Hoi-To Wai, A. Ozdaglar, A. Scaglione
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8462377 | ICASSP 2018, pp. 4114-4118
We provide a compressive-measurement based method to detect susceptible agents who may receive misinformation through their contact with ‘stubborn agents’ whose goal is to influence the opinions of agents in the network. We consider a DeGroot-type opinion dynamics model where regular agents revise their opinions by linearly combining their neighbors' opinions, but stubborn agents, while influencing others, do not change their opinions. Our proposed method hinges on estimating the temporal difference vector of network-wide opinions, computed at time instances when the stubborn agents interact. We show that this temporal difference vector has approximately the same support as the locations of the susceptible agents. Moreover, both the interaction instances and the temporal difference vector can be estimated from a small number of aggregated opinions. The performance of our method is studied both analytically and empirically. We show that the detection error decreases when the social network is better connected, or when the stubborn agents are ‘less talkative’.
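The key observation can be reproduced in a toy simulation: once the regular DeGroot averaging has nearly mixed, the temporal difference of the opinion vector taken at a stubborn-agent interaction instance is concentrated on the susceptible agents. The sketch below assumes a hypothetical network, susceptible set, and interaction schedule, and omits the compressive (aggregated-opinion) estimation step.

```python
# Support of the opinion temporal-difference vector at stubborn-agent interactions.
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = np.full((n, n), 1.0 / n)        # regular agents average all opinions
x = rng.uniform(0, 1, size=n)
susceptible = [2, 5]                # hypothetical agents in contact with the stubborn agent
z = 1.0                             # the stubborn agent's fixed opinion

for t in range(50):
    x_prev = x.copy()
    x = W @ x                       # DeGroot-type averaging among regular agents
    if t > 0 and t % 10 == 0:       # instances when the stubborn agent interacts
        x[susceptible] = 0.5 * x[susceptible] + 0.5 * z
        diff = x - x_prev           # temporal difference at the interaction instance
        print(t, np.argsort(np.abs(diff))[-2:])   # largest entries ~ susceptible set
```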
{"title":"Identifying Susceptible Agents in Time Varying Opinion Dynamics Through Compressive Measurements","authors":"Hoi-To Wai, A. Ozdaglar, A. Scaglione","doi":"10.1109/ICASSP.2018.8462377","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462377","url":null,"abstract":"We provide a compressive-measurement based method to detect susceptible agents who may receive misinformation through their contact with ‘stubborn agents’ whose goal is to influence the opinions of agents in the network. We consider a DeGroot-type opinion dynamics model where regular agents revise their opinions by linearly combining their neighbors' opinions, but stubborn agents, while influencing others, do not change their opinions. Our proposed method hinges on estimating the temporal difference vector of network-wide opinions, computed at time instances when the stubborn agents interact. We show that this temporal difference vector has approximately the same support as the locations of the susceptible agents. Moreover, both the interaction instances and the temporal difference vector can be estimated from a small number of aggregated opinions. The performance of our method is studied both analytically and empirically. We show that the detection error decreases when the social network is better connected, or when the stubborn agents are ‘less talkative’.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"36 1","pages":"4114-4118"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77438276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blind Estimation of the Speech Transmission Index for Speech Quality Prediction
Prem Seetharaman, G. Mysore, P. Smaragdis, Bryan Pardo
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8461827 | ICASSP 2018, pp. 591-595
The speech transmission index (STI) of a listening position within a given room indicates the quality and intelligibility of speech uttered in that room. The measure is very reliable for predicting speech intelligibility in many room conditions but requires an STI measurement of the impulse response for the room. We present a method for blindly estimating the STI without measuring or modeling the impulse response of the room using deep convolutional neural networks. Our model is trained entirely using simulated room impulse responses combined with clean speech examples from the DAPS dataset [1] and works directly on PCM audio. Our experiments show that our method predicts true STI with a high degree of accuracy – an average error of under 4%. It can also distinguish between different STI conditions to a level of granularity that is comparable to humans.
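For context, the classical non-blind STI is computed from a measured impulse response via its modulation transfer function. The sketch below is a simplified single-band version of that computation (full STI also averages and weights across octave bands); the impulse response is a synthetic toy, and the paper's contribution, predicting this quantity blindly with a CNN, is not reproduced here.

```python
# Simplified single-band STI from a room impulse response via the MTF.
import numpy as np

def mtf(h, fs, F):
    """Modulation transfer function of impulse response h at modulation frequency F (Hz)."""
    t = np.arange(len(h)) / fs
    e = h ** 2
    return np.abs(np.sum(e * np.exp(-2j * np.pi * F * t))) / np.sum(e)

def sti_single_band(h, fs):
    mod_freqs = [0.63, 0.8, 1.0, 1.25, 1.6, 2.0, 2.5, 3.15, 4.0, 5.0,
                 6.3, 8.0, 10.0, 12.5]                     # standard 14 modulation freqs
    m = np.array([mtf(h, fs, F) for F in mod_freqs])
    snr = 10 * np.log10(m / (1 - m))                       # apparent SNR per modulation freq
    snr = np.clip(snr, -15, 15)                            # clip to +/- 15 dB
    ti = (snr + 15) / 30                                   # transmission indices
    return ti.mean()

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
h = np.exp(-t / 0.2) * np.random.default_rng(0).standard_normal(len(t))  # toy reverberant IR
print(sti_single_band(h, fs))
```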
{"title":"Blind Estimation of the Speech Transmission Index for Speech Quality Prediction","authors":"Prem Seetharaman, G. Mysore, P. Smaragdis, Bryan Pardo","doi":"10.1109/ICASSP.2018.8461827","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461827","url":null,"abstract":"The speech transmission index (STI) of a listening position within a given room indicates the quality and intelligibility of speech uttered in that room. The measure is very reliable for predicting speech intelligibility in many room conditions but requires an STI measurement of the impulse response for the room. We present a method for blindly estimating the STI without measuring or modeling the impulse response of the room using deep convolutional neural networks. Our model is trained entirely using simulated room impulse responses combined with clean speech examples from the DAPS dataset [1] and works directly on PCM audio. Our experiments show that our method predicts true STI with a high degree of accuracy – an average error of under 4%. It can also distinguish between different STI conditions to a level of granularity that is comparable to humans.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"32 1","pages":"591-595"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77922312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Sample Complexity of Graphical Model Selection from Non-Stationary Samples
Nguyen Tran, A. Jung
Pub Date: 2018-09-10 | DOI: 10.1109/ICASSP.2018.8462689 | ICASSP 2018, pp. 6314-6317
We characterize the sample size required for accurate graphical model selection from non-stationary samples. The observed samples are modeled as a zero-mean Gaussian random process whose samples are uncorrelated but have different covariance matrices. This includes the case where observations form stationary or underspread processes. We derive a sufficient condition on the required sample size by analyzing a simple sparse neighborhood regression method.
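The neighborhood-regression idea being analysed can be sketched on i.i.d. Gaussian data: regress each variable on all the others with a sparse (Lasso) penalty and read the graph edges off the non-zero coefficients. The precision matrix, regularisation level, and threshold below are illustrative assumptions, and the non-stationary setting studied in the paper is not modelled.

```python
# Sparse neighborhood regression for graphical model selection (i.i.d. toy data).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, N = 6, 500
Theta = np.eye(p)
Theta[0, 1] = Theta[1, 0] = 0.4            # a single edge between nodes 0 and 1
Sigma = np.linalg.inv(Theta)               # corresponding covariance matrix
X = rng.multivariate_normal(np.zeros(p), Sigma, size=N)

edges = set()
for i in range(p):
    others = [j for j in range(p) if j != i]
    coef = Lasso(alpha=0.05).fit(X[:, others], X[:, i]).coef_
    for j, c in zip(others, coef):
        if abs(c) > 1e-3:                  # non-zero coefficient -> declare an edge
            edges.add(tuple(sorted((i, j))))
print(edges)   # ideally recovers {(0, 1)}
```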
{"title":"On the Sample Complexity of Graphical Model Selection from Non-Stationary Samples","authors":"Nguyen Tran, A. Jung","doi":"10.1109/ICASSP.2018.8462689","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462689","url":null,"abstract":"We characterize the sample size required for accurate graphical model selection from non-stationary samples. The observed samples are modeled as a zero-mean Gaussian random process whose samples are uncorrelated but have different covariance matrices. This includes the case where observations form stationary or underspread processes. We derive a sufficient condition on the required sample size by analyzing a simple sparse neighborhood regression method.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"479 1","pages":"6314-6317"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74599205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}