Multilevel State-Space Models Enable High Precision Event Related Potential Analysis
Pub Date: 2023-10-01 | DOI: 10.1109/IEEECONF59524.2023.10476951
Proloy Das, Mingjian He, Patrick L Purdon
During cognitive tasks, the elicited brain responses that are time-locked to the stimulus presentation are manifested in the electroencephalogram (EEG) as Event Related Potentials (ERPs). In general, ERPs are ~1 μV signals embedded in the background of much stronger neural oscillations, and thus they are traditionally extracted by averaging hundreds of trial responses so that the neural oscillations cancel each other out. However, in cognitive science experiments it is often difficult to administer a large number of trials due to physical constraints. Additionally, such extensive averaging can blur fine structures of the ERP signals, which might otherwise be indicative of various intrinsic factors. Here we propose to model the background oscillations using a novel oscillation state-space representation and identify their time-traces in a data-driven way. This allows us to effectively separate the oscillations from the response signals of interest, thus improving the signal-to-noise ratio of the evoked response and eventually increasing trial fidelity. We also impose a random-walk-like continuity constraint on the ERP waveforms to recover smooth, de-noised estimates. We employ a generalized expectation-maximization algorithm to estimate the model parameters, and then infer the approximate posterior distribution of the ERP waveforms. We demonstrate via a simulation study that our proposed ERP extraction technique relies less on large trial counts. Finally, we showcase how the ERPs extracted by our method can be more informative than traditional average-based ERPs when analyzing EEG data in cognitive task settings with fewer trials.
"Multilevel State-Space Models Enable High Precision Event Related Potential Analysis." Conference record. Asilomar Conference on Signals, Systems & Computers, 2023, pp. 1496-1499. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11534075/pdf/
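The random-walk continuity constraint on the ERP waveform amounts to a state-space smoothing problem. Below is a minimal scalar sketch of that one ingredient (numpy only): a Kalman filter under a random-walk state model recovers a smooth waveform from a noisy trace. The Gaussian "ERP" bump and the q, r values are illustrative assumptions; the paper's full multilevel model also represents the background oscillations explicitly.

```python
import numpy as np

def random_walk_kalman(y, q=1e-3, r=1.0):
    """Scalar Kalman filter under a random-walk state model.

    x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (smooth waveform of interest)
    y_t = x_t + v_t,      v_t ~ N(0, r)   (noisy EEG sample)
    """
    n = len(y)
    x = np.zeros(n)          # posterior means
    p = np.zeros(n)          # posterior variances
    x_prev, p_prev = 0.0, 1.0
    for t in range(n):
        # predict: random walk leaves the mean unchanged, inflates variance
        x_pred, p_pred = x_prev, p_prev + q
        # update with the new observation
        k = p_pred / (p_pred + r)
        x[t] = x_pred + k * (y[t] - x_pred)
        p[t] = (1 - k) * p_pred
        x_prev, p_prev = x[t], p[t]
    return x, p

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
erp = np.exp(-((t - 0.3) ** 2) / 0.002)        # toy ERP-like bump (assumed)
noisy = erp + 0.5 * rng.standard_normal(500)   # bump buried in background activity
denoised, _ = random_walk_kalman(noisy, q=5e-3, r=0.25)
```

A backward smoothing pass (as in the paper's EM framework) would further reduce the lag visible in this forward-only filter.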
A novel method for 12-lead ECG reconstruction
Pub Date: 2023-10-01 | DOI: 10.1109/ieeeconf59524.2023.10476822
Dorsa EPMoghaddam, Anton Banta, Allison Post, Mehdi Razavi, Behnaam Aazhang
This paper presents a novel approach to synthesize a standard 12-lead electrocardiogram (ECG) from any three independent ECG leads using a patient-specific encoder-decoder convolutional neural network. The objective is to decrease the number of recording locations required to obtain the same information as a 12-lead ECG, thereby enhancing patients' comfort during the recording process. We evaluate the proposed algorithm on a dataset comprising fifteen patients, as well as a randomly selected cohort of patients from the PTB diagnostic database. To evaluate the precision of the reconstructed ECG signals, we present two metrics: the correlation coefficient and root mean square error. Our proposed method achieves superior performance compared to most existing synthesis techniques, with average correlation coefficients of 0.976 and 0.97 for the two datasets, respectively. These results demonstrate the potential of our approach to improve the efficiency and comfort of ECG recording for patients, while maintaining high diagnostic accuracy.
"A novel method for 12-lead ECG reconstruction." Conference record. Asilomar Conference on Signals, Systems & Computers, 2023, pp. 1054-1058. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11404295/pdf/
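The two evaluation metrics the abstract names are easy to state in code. A minimal sketch (the toy "lead" signals below are synthetic placeholders; the paper's encoder-decoder network itself is not reproduced):

```python
import numpy as np

def correlation_coefficient(x, y):
    """Pearson correlation between a true and a reconstructed lead."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def rmse(x, y):
    """Root mean square error between a true and a reconstructed lead."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((x - y) ** 2)))

t = np.linspace(0, 1, 250)
lead_true = np.sin(2 * np.pi * 1.2 * t)                     # toy target lead
lead_recon = lead_true + 0.05 * np.cos(2 * np.pi * 7 * t)   # imperfect reconstruction
```

In the paper these metrics are averaged across the nine synthesized leads and across patients.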
Topological Knowledge Distillation for Wearable Sensor Data
Pub Date: 2022-10-01 | Epub: 2023-03-07 | DOI: 10.1109/ieeeconf56349.2022.10052019
Eun Som Jeon, Hongjun Choi, Ankita Shukla, Yuan Wang, Matthew P Buman, Pavan Turaga
Converting wearable sensor data to actionable health insights has attracted large interest in recent years. Deep learning methods have achieved many successes in various applications involving wearables. However, wearable sensor data has unique issues related to sensitivity and variability between subjects, and dependency on sampling rate for analysis. To mitigate these issues, a different type of analysis using topological data analysis (TDA) has shown promise as well. TDA captures robust features, such as persistence images (PIs), in complex data through the persistent homology algorithm, which holds the promise of boosting machine learning performance. However, because of the computational load required by TDA methods for large-scale data, integration and implementation have lagged behind. Further, many applications involving wearables require models to be compact enough to allow deployment on edge devices. In this context, knowledge distillation (KD) has been widely applied to generate a small model (student model) using a pre-trained high-capacity network (teacher model). In this paper, we propose a new KD strategy using two teacher models - one that uses the raw time series and another that uses persistence images derived from the time series. These two teachers then train a student using KD. In essence, the student learns from heterogeneous teachers providing different knowledge. To account for the different properties of the teachers' features, we apply an annealing strategy and adaptive temperature in KD. Finally, a robust student model is distilled, which uses the time-series data only. We find that incorporating persistence features via the second teacher leads to significantly improved performance. This approach provides a unique way of fusing deep learning with topological features to develop effective models.
"Topological Knowledge Distillation for Wearable Sensor Data." Conference record. Asilomar Conference on Signals, Systems & Computers, 2022, pp. 837-842. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10426276/pdf/nihms-1920709.pdf
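The two-teacher distillation target can be sketched as a temperature-softened mixture of the teachers' output distributions, with a mixing weight alpha playing the role of the annealing knob. This is a hedged numpy sketch of that loss only; the paper's exact annealing schedule and adaptive-temperature rule are not specified here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax along the last axis."""
    z = np.asarray(z, float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def two_teacher_kd_loss(student_logits, t1_logits, t2_logits, alpha=0.5, T=4.0):
    """Cross-entropy of the student against a convex mix of two teachers'
    softened distributions; alpha would be annealed during training."""
    target = alpha * softmax(t1_logits, T) + (1 - alpha) * softmax(t2_logits, T)
    log_student = np.log(softmax(student_logits, T) + 1e-12)
    return float(-(target * log_student).sum(axis=-1).mean())
```

In practice this term is combined with the ordinary cross-entropy against the hard labels, as in standard KD.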
A Hybrid Scattering Transform for Signals with Isolated Singularities
Pub Date: 2021-10-01 | DOI: 10.1109/ieeeconf53345.2021.9723364
Michael Perlmutter, Jieqian He, Mark Iwen, Matthew Hirn
The scattering transform is a wavelet-based model of Convolutional Neural Networks originally introduced by S. Mallat. Mallat's analysis shows that this network has desirable stability and invariance guarantees and therefore helps explain the observation that the filters learned by early layers of a Convolutional Neural Network typically resemble wavelets. Our aim is to understand what sort of filters should be used in the later layers of the network. Towards this end, we propose a two-layer hybrid scattering transform. In our first layer, we convolve the input signal with a wavelet filter transform to promote sparsity, and, in the second layer, we convolve with a Gabor filter to leverage the sparsity created by the first layer. We show that these measurements characterize information about signals with isolated singularities. We also show that the Gabor measurements used in the second layer can be used to synthesize sparse signals such as those produced by the first layer.
"A Hybrid Scattering Transform for Signals with Isolated Singularities." Conference record. Asilomar Conference on Signals, Systems & Computers, 2021, pp. 1322-1329. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9425109/pdf/nihms-1829244.pdf
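The two-layer pipeline (wavelet convolution and modulus, then Gabor convolution and modulus on the sparse layer-one output) can be sketched directly. The filter shapes and scales below are illustrative assumptions, not the paper's exact filters.

```python
import numpy as np

def morlet(n, scale):
    """Real Morlet-like wavelet (assumed form, for illustration)."""
    t = np.arange(-n // 2, n // 2)
    return np.exp(-(t / scale) ** 2) * np.cos(5 * t / scale)

def gabor(n, scale, freq):
    """Real Gabor filter: Gaussian envelope times a cosine carrier."""
    t = np.arange(-n // 2, n // 2)
    return np.exp(-(t / scale) ** 2) * np.cos(freq * t)

def hybrid_scattering(x, w_scale=8, g_scale=16, g_freq=0.5):
    """Layer 1: wavelet convolution + modulus (sparsifies isolated singularities).
    Layer 2: Gabor convolution + modulus, leveraging the layer-1 sparsity."""
    u1 = np.abs(np.convolve(x, morlet(64, w_scale), mode="same"))
    u2 = np.abs(np.convolve(u1, gabor(64, g_scale, g_freq), mode="same"))
    return u1, u2

x = np.zeros(512)
x[200] = 1.0                 # an isolated singularity (a spike)
u1, u2 = hybrid_scattering(x)
```

The layer-one modulus localizes the singularity; the layer-two Gabor measurements then summarize it, which is the structure the paper's synthesis result exploits.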
A mechanistically interpretable model of the retinal neural code for natural scenes with multiscale adaptive dynamics
Pub Date: 2021-10-01 | Epub: 2022-03-04 | DOI: 10.1109/ieeeconf53345.2021.9723187
Xuehao Ding, Dongsoo Lee, Satchel Grant, Heike Stein, Lane McIntosh, Niru Maheswaranathan, Stephen Baccus
The visual system processes stimuli over a wide range of spatiotemporal scales, with individual neurons receiving input from tens of thousands of neurons whose dynamics range from milliseconds to tens of seconds. This poses a challenge for creating models that both accurately capture visual computations and are mechanistically interpretable. Here we present a model of salamander retinal ganglion cell spiking responses recorded with a multielectrode array that captures natural scene responses and slow adaptive dynamics. The model consists of a three-layer convolutional neural network (CNN) modified to include local recurrent synaptic dynamics taken from a linear-nonlinear-kinetic (LNK) model [1]. We presented alternating natural scenes and uniform field white noise stimuli designed to engage slow contrast adaptation. To overcome difficulties fitting slow and fast dynamics together, we first optimized all fast spatiotemporal parameters, then separately optimized recurrent slow synaptic parameters. The resulting full model reproduces a wide range of retinal computations and is mechanistically interpretable, having internal units that correspond to retinal interneurons with biophysically modeled synapses. This model allows us to study the contribution of model units to any retinal computation, and examine how long-term adaptation changes the retinal neural code for natural scenes through selective adaptation of retinal pathways.
"A mechanistically interpretable model of the retinal neural code for natural scenes with multiscale adaptive dynamics." Conference record. Asilomar Conference on Signals, Systems & Computers, 2021, pp. 287-291. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10680971/pdf/
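The slow adaptive dynamics contributed by the LNK-style recurrent synapse can be illustrated with a two-state resource model: sustained drive depletes a resource that recovers slowly, so the response to a contrast step decays over time. This is a simplified stand-in with assumed rate constants, not the paper's fitted LNK kinetics.

```python
import numpy as np

def kinetic_synapse(u, k_a=0.05, k_r=0.005, dt=1.0):
    """Two-state kinetic synapse sketch: a resting resource R is consumed by
    activation in proportion to the input u and recovers slowly, producing
    history-dependent gain changes (contrast adaptation)."""
    R = 1.0
    out = np.zeros(len(u), dtype=float)
    for t, ut in enumerate(u):
        a = k_a * max(ut, 0.0) * R            # activation consumes the resource
        R += dt * (k_r * (1.0 - R) - a)       # slow recovery, fast depletion
        R = min(max(R, 0.0), 1.0)             # keep the state physical
        out[t] = a
    return out

drive = np.ones(2000)            # step of sustained high contrast
resp = kinetic_synapse(drive)    # response decays as the resource depletes
```

In the paper this kind of kinetic block sits inside the CNN's units, which is what makes the fitted model mechanistically interpretable.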
Proportionate Adaptive Filters Based on Minimizing Diversity Measures for Promoting Sparsity
Pub Date: 2019-11-01 | Epub: 2020-03-30 | DOI: 10.1109/ieeeconf44664.2019.9048716
Ching-Hua Lee, Bhaskar D Rao, Harinath Garudadri
In this paper, a novel way of deriving proportionate adaptive filters is proposed based on diversity measure minimization using the iterative reweighting techniques well-known in the sparse signal recovery (SSR) area. The resulting least mean square (LMS)-type and normalized LMS (NLMS)-type sparse adaptive filtering algorithms can incorporate various diversity measures that have proved effective in SSR. Furthermore, by setting the regularization coefficient of the diversity measure term to zero in the resulting algorithms, Sparsity promoting LMS (SLMS) and Sparsity promoting NLMS (SNLMS) are introduced, which exploit but do not strictly enforce the sparsity of the system response if it already exists. Moreover, unlike most existing proportionate algorithms that design the step-size control factors based on heuristics, our SSR-based framework leads to designing the factors in a more systematic way. Simulation results are presented to demonstrate the convergence behavior of the derived algorithms for systems with different sparsity levels.
"Proportionate Adaptive Filters Based on Minimizing Diversity Measures for Promoting Sparsity." Conference record. Asilomar Conference on Signals, Systems & Computers, 2019, pp. 769-773. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7676632/pdf/nihms-1644992.pdf
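The proportionate form the abstract describes can be sketched for the l1-type diversity measure: iterative reweighting yields per-tap step-size gains proportional to |w_i| + eps, giving a PNLMS-flavoured update. The eps and mu values and the toy sparse system below are assumptions, not the paper's designs.

```python
import numpy as np

def sparsity_promoting_nlms(x, d, L=32, mu=0.5, eps=1e-3, delta=1e-6):
    """NLMS with per-tap proportionate gains derived from reweighting an
    l1-type diversity measure: g_i ~ |w_i| + eps (a PNLMS-like sketch)."""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]          # regressor, most recent sample first
        e = d[n] - w @ u                      # a priori error
        g = np.abs(w) + eps                   # reweighting-based gains
        g = g / g.sum()
        w += mu * e * (g * u) / (u @ (g * u) + delta)
    return w

rng = np.random.default_rng(1)
h = np.zeros(32)
h[3], h[10] = 1.0, -0.5                       # sparse unknown system
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]                # noiseless desired signal
w_hat = sparsity_promoting_nlms(x, d)
```

Large taps receive large gains and converge quickly, which is the behaviour proportionate filters are designed for on sparse system responses.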
A state-space model for dynamic functional connectivity
Pub Date: 2019-11-01 | Epub: 2020-03-30 | DOI: 10.1109/ieeeconf44664.2019.9048807
Sourish Chakravarty, Zachary D Threlkeld, Yelena G Bodien, Brian L Edlow, Emery N Brown
Dynamic functional connectivity (DFC) analysis involves measuring correlated neural activity over time across multiple brain regions. Significant regional correlations among neural signals, such as those obtained from resting-state functional magnetic resonance imaging (fMRI), may represent neural circuits associated with rest. The conventional approach of estimating the correlation dynamics as a sequence of static correlations from sliding time-windows has statistical limitations. To address this issue, we propose a multivariate stochastic volatility model for estimating DFC inspired by recent work in econometrics research. This model assumes a state-space framework where the correlation dynamics of a multivariate normal observation sequence is governed by a positive-definite matrix-variate latent process. Using this statistical model within a sequential Bayesian estimation framework, we use blood oxygenation level dependent activity from multiple brain regions to estimate posterior distributions on the correlation trajectory. We demonstrate the utility of this DFC estimation framework by analyzing its performance on simulated data, and by estimating correlation dynamics in resting state fMRI data from a patient with a disorder of consciousness (DoC). Our work advances the state-of-the-art in DFC analysis and its principled use in DoC biomarker exploration.
"A state-space model for dynamic functional connectivity." Conference record. Asilomar Conference on Signals, Systems & Computers, 2019, pp. 240-244. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7425228/pdf/nihms-1612215.pdf
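The setup can be illustrated by simulating a latent correlation trajectory that follows a random walk and tracking it recursively rather than with static windows. The EWMA covariance tracker below is a cheap stand-in used only to illustrate the problem; the paper's method is a matrix-variate stochastic volatility model with sequential Bayesian estimation, not an EWMA.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 4000

# Latent correlation trajectory: a random walk squashed into (-1, 1)
z = np.cumsum(0.02 * rng.standard_normal(T))
rho = np.tanh(z)

# Bivariate observations with time-varying correlation
y = np.zeros((T, 2))
for t in range(T):
    C = np.array([[1.0, rho[t]], [rho[t], 1.0]])
    y[t] = np.linalg.cholesky(C) @ rng.standard_normal(2)

# Recursive tracker: exponentially weighted covariance -> correlation estimate
lam = 0.98
S = np.eye(2)
rho_hat = np.zeros(T)
for t in range(T):
    S = lam * S + (1 - lam) * np.outer(y[t], y[t])
    rho_hat[t] = S[0, 1] / np.sqrt(S[0, 0] * S[1, 1])
```

A state-space model improves on this by also delivering posterior uncertainty on the correlation trajectory, which matters for biomarker claims in single patients.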
Detecting Causality using Deep Gaussian Processes
Pub Date: 2019-11-01 | Epub: 2020-03-30 | DOI: 10.1109/IEEECONF44664.2019.9048963
Guanchao Feng, J Gerald Quirk, Petar M Djurić
Convergent cross mapping (CCM) is a state space reconstruction (SSR)-based method designed for causal discovery in coupled time series, where Granger causality may not be applicable due to a separability assumption. However, CCM requires a large number of observations and is not robust to observation noise, which limits its applicability. Moreover, in CCM and its variants, the SSR step is mostly implemented with delay embedding, where the parameters for reconstruction usually need to be selected using grid search-based methods. In this paper, we propose a Bayesian version of CCM using deep Gaussian processes (DGPs), which are naturally connected with deep neural networks. In particular, we adopt the framework of SSR-based causal discovery and carry out the key steps using DGPs within a non-parametric Bayesian probabilistic framework in a principled manner. The proposed approach is first validated on simulated data and then tested on data used in obstetrics for monitoring the well-being of fetuses, i.e., fetal heart rate (FHR) and uterine activity (UA) signals in the last two hours before delivery. Our results indicate that UA affects the FHR, which agrees with recent clinical studies.
"Detecting Causality using Deep Gaussian Processes." Conference record. Asilomar Conference on Signals, Systems & Computers, 2019, pp. 472-476.
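The classical CCM skeleton that the paper replaces with DGPs is delay embedding plus nearest-neighbour cross-mapping: if y drives x, then x's reconstructed manifold carries information about y, so y can be predicted from neighbours on that manifold. The coupled logistic maps and all parameters below are illustrative assumptions.

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Takens-style delay embedding: row t is [x_{t+2*tau}, x_{t+tau}, x_t]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)][::-1], axis=1)

def cross_map(x, y, dim=3, tau=2, k=4):
    """CCM skeleton: predict y from neighbours on the manifold of x.
    High skill suggests y causally influences x."""
    Mx = delay_embed(x, dim, tau)
    offset = (dim - 1) * tau
    y_al = y[offset:offset + len(Mx)]         # align y with embedding rows
    preds = np.empty(len(Mx))
    for i in range(len(Mx)):
        d = np.linalg.norm(Mx - Mx[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        nb = np.argsort(d)[:k]                 # k nearest manifold neighbours
        w = np.exp(-d[nb] / (d[nb].min() + 1e-12))
        preds[i] = (w @ y_al[nb]) / w.sum()
    return float(np.corrcoef(y_al, preds)[0, 1])

# Coupled logistic maps: y is autonomous and drives x (assumed toy system)
N = 600
x, y = np.zeros(N), np.zeros(N)
x[0], y[0] = 0.4, 0.2
for t in range(N - 1):
    y[t + 1] = y[t] * (3.8 - 3.8 * y[t])
    x[t + 1] = x[t] * (3.7 - 3.7 * x[t] - 0.35 * y[t])

skill = cross_map(x, y)   # cross-map skill of recovering the driver y from x
```

The grid search over `dim` and `tau` that this sketch hard-codes is exactly what the paper's DGP formulation aims to avoid.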
Accelerated Simultaneous Multi-Slice MRI using Subject-Specific Convolutional Neural Networks
Pub Date: 2018-10-01 | Epub: 2019-02-21 | DOI: 10.1109/ACSSC.2018.8645313
Chi Zhang, Steen Moeller, Sebastian Weingärtner, Kâmil Uğurbil, Mehmet Akçakaya
Simultaneous multi-slice or multi-band (SMS/MB) imaging allows accelerated coverage in magnetic resonance imaging (MRI). Multiple slices are excited and acquired at the same time, and reconstructed using the redundancies in receiver coil arrays, similar to parallel imaging. SMS/MB reconstruction is currently performed with linear reconstruction techniques. Recently, a nonlinear reconstruction method for parallel imaging, Robust Artificial-neural-networks for k-space Interpolation (RAKI) was proposed and shown to improve upon linear methods. This method uses convolutional neural networks (CNN) trained solely on subject-specific calibration data. In this study, we sought to extend RAKI to SMS/MB imaging reconstruction. CNN training was performed on calibration data acquired prior to SMS/MB imaging, in a manner consistent with the existing linear methods. These CNNs were used to reconstruct a time series of functional MRI (fMRI) data. CNN network parameters were optimized using an extensive search of the parameter space. With these optimal parameters, RAKI substantially improves image quality compared to a commonly used linear reconstruction algorithm, especially for high acceleration rates.
{"title":"Accelerated Simultaneous Multi-Slice MRI using Subject-Specific Convolutional Neural Networks.","authors":"Chi Zhang, Steen Moeller, Sebastian Weingärtner, Kâmil Uğurbil, Mehmet Akçakaya","doi":"10.1109/ACSSC.2018.8645313","DOIUrl":"10.1109/ACSSC.2018.8645313","url":null,"abstract":"<p><p>Simultaneous multi-slice or multi-band (SMS/MB) imaging allows accelerated coverage in magnetic resonance imaging (MRI). Multiple slices are excited and acquired at the same time, and reconstructed using the redundancies in receiver coil arrays, similar to parallel imaging. SMS/MB reconstruction is currently performed with linear reconstruction techniques. Recently, a nonlinear reconstruction method for parallel imaging, Robust Artificial-neural-networks for k-space Interpolation (RAKI) was proposed and shown to improve upon linear methods. This method uses convolutional neural networks (CNN) trained solely on subject-specific calibration data. In this study, we sought to extend RAKI to SMS/MB imaging reconstruction. CNN training was performed on calibration data acquired prior to SMS/MB imaging, in a manner consistent with the existing linear methods. These CNNs were used to reconstruct a time series of functional MRI (fMRI) data. CNN network parameters were optimized using an extensive search of the parameter space. With these optimal parameters, RAKI substantially improves image quality compared to a commonly used linear reconstruction algorithm, especially for high acceleration rates.</p>","PeriodicalId":72692,"journal":{"name":"Conference record. 
Asilomar Conference on Signals, Systems & Computers","volume":"2018 ","pages":"1636-1640"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6938220/pdf/nihms-1064538.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37503776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
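RAKI's core idea, training an interpolator on subject-specific calibration data and then applying it to undersampled k-space, can be illustrated with the linear, GRAPPA-style baseline it generalizes. The NumPy sketch below is a deliberately simplified 1-D, rate-2 toy with hypothetical names and data; RAKI itself replaces the least-squares kernel with a small CNN trained on the same calibration scan.

```python
import numpy as np

def fit_kernel(acs):
    """Least-squares fit of a linear k-space interpolation kernel from
    fully sampled calibration (ACS) data: predict each k-space column
    from its two neighbours, jointly across all coils."""
    nc, nk = acs.shape
    src = np.vstack([np.concatenate([acs[:, k - 1], acs[:, k + 1]])
                     for k in range(1, nk - 1)])     # (nk-2, 2*nc)
    tgt = acs[:, 1:nk - 1].T                         # (nk-2, nc)
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W                                         # (2*nc, nc)

def apply_kernel(undersampled, W):
    """Fill the skipped (odd) k-space columns of rate-2 undersampled data."""
    out = undersampled.copy()
    nc, nk = out.shape
    for k in range(1, nk - 1, 2):
        s = np.concatenate([out[:, k - 1], out[:, k + 1]])
        out[:, k] = s @ W
    return out

# Toy 1-D multi-coil data: a smooth object seen through 4 smooth coil
# sensitivity profiles (all values below are invented for illustration).
n, nc = 64, 4
pos = np.arange(n)
obj = np.exp(-((pos - 32) / 10.0) ** 2)
sens = np.stack([np.exp(-((pos - c) / 16.0) ** 2) for c in (8, 24, 40, 56)])
full_k = np.fft.fftshift(np.fft.fft(sens * obj, axis=1), axes=1)

und_k = full_k.copy()
und_k[:, 1::2] = 0                                   # rate-2 undersampling
recon_k = apply_kernel(und_k, fit_kernel(full_k))    # self-calibrated toy

err_zero = np.linalg.norm(und_k - full_k)            # zero-filled error
err_rec = np.linalg.norm(recon_k - full_k)           # interpolated error
```

The calibration-fitted kernel recovers most of the energy in the skipped columns, which is the baseline behaviour that RAKI's nonlinear CNN interpolator improves upon.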
Pub Date: 2018-10-01 | Epub Date: 2019-02-21 | DOI: 10.1109/ACSSC.2018.8645557
Louis Pisha, Sean Hamilton, Dhiman Sengupta, Ching-Hua Lee, Krishna Chaithanya Vastare, Tamara Zubatiy, Sergio Luna, Cagri Yalcin, Alex Grant, Rajesh Gupta, Ganz Chockalingam, Bhaskar D Rao, Harinath Garudadri
We have previously reported a realtime, open-source speech-processing platform (OSP) for hearing-aid (HA) research. In this contribution, we describe a wearable version of this platform to facilitate audiological studies in the lab and in the field. The system is based on smartphone chipsets to leverage their power efficiency in terms of FLOPS/watt and their economies of scale. We present the system architecture and discuss salient design elements in support of HA research. The ear-level assemblies support up to 4 microphones on each ear, with 96 kHz, 24-bit codecs. The wearable unit runs OSP Release 2018c on top of 64-bit Debian Linux for binaural HA processing with an overall latency of 5.6 ms. The wearable unit also hosts an embedded web server (EWS) to monitor and control the HA state in realtime. We describe three example web apps and the typical audiological studies they enable. Finally, we describe a baseline speech enhancement module included with Release 2018c and outline extensions to the algorithms as future work.
{"title":"A Wearable Platform for Research in Augmented Hearing.","authors":"Louis Pisha, Sean Hamilton, Dhiman Sengupta, Ching-Hua Lee, Krishna Chaithanya Vastare, Tamara Zubatiy, Sergio Luna, Cagri Yalcin, Alex Grant, Rajesh Gupta, Ganz Chockalingam, Bhaskar D Rao, Harinath Garudadri","doi":"10.1109/ACSSC.2018.8645557","DOIUrl":"10.1109/ACSSC.2018.8645557","url":null,"abstract":"<p><p>We have previously reported a realtime, open-source speech-processing platform (OSP) for hearing aids (HAs) research. In this contribution, we describe a wearable version of this platform to facilitate audiological studies in the lab and in the field. The system is based on smartphone chipsets to leverage power efficiency in terms of FLOPS/watt and economies of scale. We present the system architecture and discuss salient design elements in support of HA research. The ear-level assemblies support up to 4 microphones on each ear, with 96 kHz, 24 bit codecs. The wearable unit runs OSP Release 2018c on top of 64-bit Debian Linux for binaural HA with an overall latency of 5.6 ms. The wearable unit also hosts an embedded web server (EWS) to monitor and control the HA state in realtime. We describe three example web apps in support of typical audiological studies they enable. Finally, we describe a baseline speech enhancement module included with Release 2018c, and describe extensions to the algorithms as future work.</p>","PeriodicalId":72692,"journal":{"name":"Conference record. 
Asilomar Conference on Signals, Systems & Computers","volume":"2018 ","pages":"223-227"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6677400/pdf/nihms-1035792.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41221813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
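For a sense of scale, the specifications quoted in this abstract (96 kHz, 24-bit codecs, 4 microphones per ear, 5.6 ms end-to-end latency) imply the raw capture bandwidth and per-frame sample budget computed below. The 256-sample frame split at the end is an assumed illustration, not a figure reported by the authors.

```python
FS = 96_000        # codec sample rate reported for the platform, Hz
BITS = 24          # codec resolution, bits per sample
MICS = 4           # microphones per ear

# Raw capture bandwidth per ear, in megabits per second.
mbit_per_s = FS * BITS * MICS / 1e6          # 9.216 Mbit/s

# Number of samples that fit inside the reported 5.6 ms latency budget.
latency_s = 5.6e-3
sample_budget = FS * latency_s               # 537.6 samples

# A power-of-two processing frame of 256 samples (~2.67 ms) would leave
# roughly half the budget for codec, transport, and algorithm delay.
frame = 256
frame_ms = 1e3 * frame / FS
```

Numbers like these explain the choice of smartphone-class chipsets: sustaining ~9 Mbit/s of multichannel capture per ear while processing within a few-millisecond frame demands both throughput and power efficiency.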