Building consensus on clinical outcome assessments for BCI devices: a summary of the 10th BCI Society Meeting 2023 workshop
Pub Date: 2024-09-17 | DOI: 10.1088/1741-2552/ad7bec
Abbey Sawyer, Nikole Chetty, David P McMullen, Heather Dean, Jacek Eisler, Melanie Fried-Oken, Leigh R Hochberg, Chris Gibbons, Elizabeth Waite, Tom Oxley, Adam Fry, Douglas J Weber, David Putrino
The 10th International Brain Computer Interface (BCI) Society Meeting, 'Balancing Innovation and Translation', was held from 6 to 9 June 2023 in Brussels, Belgium. This report provides a summary of the workshop 'Building Consensus on Clinical Outcome Assessments (COAs) for BCI Devices'. The workshop was intended to give participants an overview of the current state of BCI, future opportunities, and how different countries and regions provide regulatory oversight to support the BCI community in developing safe and effective devices for patients. Five presentations and a panel discussion, including representatives from regulators, industry, and clinical research stakeholders, focused on how the various stakeholders and the BCI community might best work together to ensure studies provide data that are useful for evaluating safety and effectiveness, including reaching consensus on COAs that represent clinically meaningful benefits and support regulatory and payor requirements. This report focuses on the regulatory and reimbursement requirements for medical devices and how best to measure safety and effectiveness, and summarizes the presentations from five experts and the discussion between the panel and the audience. Consensus was reached on the following items specifically related to BCI: (i) the importance of and need for a new generation of COAs, (ii) the challenges facing the development of appropriate COAs, and (iii) that improvements in COAs should demonstrate obvious and clinically meaningful benefit(s). There was discussion on: (i) clinical trial design for BCIs and (ii) considerations for payor reimbursement and other funding. Whilst the importance of building community consensus on COAs was apparent, further collaboration will be required to reach consensus on which specific current and/or novel COAs could be used for the BCI field to evolve from research to market.
{"title":"Building consensus on clinical outcome assessments for BCI devices. A summary of the 10th BCI society meeting 2023 workshop.","authors":"Abbey Sawyer,Nikole Chetty,David P McMullen,Heather Dean,Jacek Eisler,Melanie Fried-Oken,Leigh R Hochberg,Chris Gibbons,Elizabeth Waite,Tom Oxley,Adam Fry,Douglas J Weber,David Putrino","doi":"10.1088/1741-2552/ad7bec","DOIUrl":"https://doi.org/10.1088/1741-2552/ad7bec","url":null,"abstract":"The 10th International Brain Computer Interface (BCI) Society Meeting, 'Balancing Innovation and Translation', was held from the 6th to 9th of June 2023 in Brussels, Belgium. This report provides a summary of the workshop 'Building Consensus on Clinical Outcome Assessments (COAs) for BCI Devices'. This workshop was intended to give participants an overview of the current state of BCI, future opportunities, and how different countries and regions provide regulatory oversight to support the BCI community to develop safe and effective devices for patients. Five presentations and a panel discussion including representatives from regulators, industry, and clinical research stakeholders focused on how various stakeholders and the BCI community might best work together to ensure studies provide data that is useful for evaluating safety and effectiveness, including reaching consensus on clinical outcome assessments (COAs) that represent clinically meaningful benefits and support regulatory and payor requirements. This report focuses on the regulatory and reimbursement requirements for medical devices and how to best measure safety and effectiveness and summarizes the presentations from five experts and the discussion between the panel and the audience. Consensus was reached on the following items specifically related to BCI: (i) the importance of and need for a new generation of COAs, (ii) the challenges facing the development of appropriate clinical outcome assessments, and (iii) that improvements in COAs should demonstrate obvious and clinically meaningful benefit(s). There was discussion on: (i) clinical trial design for BCIs and (ii) considerations for payor reimbursement and other funding. Whilst the importance of building community consensus on COAs was apparent, further collaboration will be required to reach consensus on which specific current and/or novel COAs could be used for the BCI field to evolve from research to market.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
o-CLEAN: a novel multi-stage algorithm for the ocular artifacts' correction from EEG data in out-of-the-lab applications
Pub Date: 2024-09-16 | DOI: 10.1088/1741-2552/ad7b78
Vincenzo Ronca, Gianluca Di Flumeri, Andrea Giorgi, Alessia Vozzi, Rossella Capotorto, Daniele Germano, Nicolina Sciaraffa, Gianluca Borghini, Fabio Babiloni, Pietro Aricò
In the context of electroencephalographic (EEG) signal processing, artifacts generated by ocular movements, such as blinks, are significant confounding factors. These artifacts overwhelm informative EEG features and may occur too frequently to simply remove affected epochs without losing valuable data. Correcting these artifacts remains a challenge, particularly in out-of-lab and online applications using wearable EEG systems (i.e. with a low number of EEG channels and without any additional channels to track the EOG).
Objective. The main objective of the present work was to validate a novel ocular blink artifact correction method, named o-CLEAN (multi-stage OCuLar artEfActs deNoising algorithm), suitable for online processing with minimal EEG channels.
Approach. The research considered one EEG dataset collected in a highly controlled environment and a second collected in a real environment. The analysis compared the o-CLEAN method with previously validated state-of-the-art techniques and evaluated its performance along two dimensions: (a) ocular artifact correction performance (IN-Blink), and (b) EEG signal preservation when the method was applied without any ocular artifact occurrence (OUT-Blink).
Main results. The results highlighted that (i) the o-CLEAN algorithm proved to be at least as reliable, in terms of ocular blink artifact correction, as the most validated approaches identified in the scientific literature, and (ii) o-CLEAN showed the best performance in terms of EEG signal preservation, especially with a low number of EEG channels.
Significance. The testing and validation of o-CLEAN addresses a relevant open issue in EEG bioengineering, especially for out-of-the-lab applications. The method offers an effective solution for correcting ocular artifacts in EEG signals with a low number of available channels, for online processing, and without any specific EOG template. It proved particularly effective for EEG data gathered in real environments using wearable systems, a rapidly expanding area within applied neuroscience.
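The o-CLEAN stages themselves are not spelled out in this summary, but the family of methods it belongs to (online blink correction without a dedicated EOG channel) can be illustrated with a minimal regression-based sketch. Everything below is an assumption for illustration: the frontal-channel surrogate, the z-score threshold, and the window length are hypothetical choices, not the published algorithm.

```python
import numpy as np

def correct_blinks(eeg, fs, ref_idx=0, z_thresh=4.0, win_s=0.4):
    """Illustrative regression-based blink correction (NOT the o-CLEAN algorithm).

    eeg     : (n_channels, n_samples) array
    ref_idx : index of a frontal channel (e.g. Fp1) used as a blink surrogate
    """
    ref = eeg[ref_idx]
    z = (ref - ref.mean()) / ref.std()
    blink_mask = np.abs(z) > z_thresh                   # crude blink detection
    # dilate the mask so each detected peak covers a full blink window
    half = int(win_s * fs / 2)
    for i in np.flatnonzero(blink_mask):
        blink_mask[max(0, i - half):i + half] = True
    if not blink_mask.any():
        return eeg                                      # nothing to correct
    cleaned = eeg.copy()
    for ch in range(eeg.shape[0]):
        if ch == ref_idx:
            continue
        # regression coefficient estimated only on blink-contaminated samples
        b = np.dot(eeg[ch, blink_mask], ref[blink_mask]) / np.dot(ref[blink_mask], ref[blink_mask])
        cleaned[ch, blink_mask] -= b * ref[blink_mask]
    return cleaned
```

Because the mask and coefficients are computed per segment, this pattern can run online over short buffers, which is the setting the abstract targets.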
{"title":"o-CLEAN: a novel multi-stage algorithm for the ocular artifacts' correction from EEG data in out-of-the-lab applications.","authors":"Vincenzo Ronca,Gianluca Di Flumeri,Andrea Giorgi,Alessia Vozzi,Rossella Capotorto,Daniele Germano,Nicolina Sciaraffa,Gianluca Borghini,Fabio Babiloni,Pietro Aricò","doi":"10.1088/1741-2552/ad7b78","DOIUrl":"https://doi.org/10.1088/1741-2552/ad7b78","url":null,"abstract":"In the context of Electroencephalographic (EEG) signal processing, artifacts generated by ocular movements, such as blinks, are significant confounding factors. These artifacts overwhelm informative EEG features and may occur too frequently to simply remove affected epochs without losing valuable data. Correcting these artifacts remains a challenge, particularly in out-of-lab and online applications using wearable EEG systems (i.e. with low number of EEG channels, without any additional channels to track EOG).OBJECTIVEthe main objective of the present work consisted in validating a novel ocular blinks artefacts correction method, named o-CLEAN (multi-stage OCuLar artEfActs deNoising algorithm), suitable for online processing with minimal EEG channels.APPROACHthe research was conducted considering one EEG dataset collected in highly controlled environment, and a second one collected in real environment. The analysis was performed by comparing the o-CLEAN method with previously validated state-of-art techniques, and by evaluating its performance along two dimensions: a) the ocular artefacts correction performance (IN-Blink), and b) the EEG signal preservation when the method was applied without any ocular artefacts occurrence (OUT-Blink).MAIN RESULTSresults highlighted that i) o-CLEAN algorithm resulted to be, at least, significantly reliable as the most validated approaches identified in scientific literature in terms of ocular blink artifacts correction, ii) o-CLEAN showed the best performances in terms of EEG signal preservation especially with a low number of EEG channels.SIGNIFICANCEthe testing and validation of the o-CLEAN addresses a relevant open issue in bioengineering EEG processing, especially within out-of-the-lab application. In fact, the method offers an effective solution for correcting ocular artifacts in EEG signals with a low number of available channels, for online processing, and without any specific template of the EOG. It was demonstrated to be particularly effective for EEG data gathered in real environments using wearable systems, a rapidly expanding area within applied neuroscience.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"106 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PDMS/CNT electrodes with bioamplifier for practical in-the-ear and conventional biosignal recordings
Pub Date: 2024-09-10 | DOI: 10.1088/1741-2552/ad7905
Jongsook Sanguantrakul, Apit Hemakom, Tharapong Soonrach, Pasin Israsena
For dry electrodes to become practical solutions in emerging applications such as wearable devices, flexible tattoo circuits, and stretchable displays, issues such as easy fabrication, strong durability, and low-cost materials must be addressed. The objective of this study was to propose soft, dry electrodes developed from polydimethylsiloxane (PDMS) and carbon nanotube (CNT) composites. Connected to both conventional and in-house NTAmp biosignal instruments for comparative studies, the performance of the proposed dry electrodes was evaluated through electromyogram (EMG), electrocardiogram (ECG), and electroencephalogram (EEG) measurements. Results demonstrated that the capability of the PDMS/CNT electrodes to receive biosignals was on par with that of commercial electrodes (adhesive and gold-cup electrodes). Depending on the type of stimuli, a signal-to-noise ratio (SNR) in the 5-10 dB range was achieved. The results show that the performance of the proposed dry electrode is comparable to that of commercial electrodes, offering possibilities for diverse applications. These may include the physical examination of vital medical signs, the control of intelligent devices and robots, and the transmission of signals through flexible materials.
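For reference, the reported 5-10 dB figures follow the standard power-ratio definition of SNR. A minimal computation, assuming separately recorded signal and baseline (noise) epochs rather than the authors' exact protocol:

```python
import numpy as np

def snr_db(signal_epoch, noise_epoch):
    """SNR in dB as 10*log10(P_signal / P_noise), with P the mean squared amplitude."""
    p_signal = np.mean(np.square(signal_epoch))
    p_noise = np.mean(np.square(noise_epoch))
    return 10.0 * np.log10(p_signal / p_noise)

# A 5-10 dB SNR corresponds to a signal power roughly 3.2-10x the noise power.
```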
{"title":"PDMS/CNT electrodes with bioamplifier for practical in-the-ear and conventional biosignal recordings.","authors":"Jongsook Sanguantrakul,Apit Hemakom,Tharapong Soonrach,Pasin Israsena","doi":"10.1088/1741-2552/ad7905","DOIUrl":"https://doi.org/10.1088/1741-2552/ad7905","url":null,"abstract":"Potential usage of dry electrodes in emerging applications such as wearable devices, flexible tattoo circuits, and stretchable displays requires that, to become practical solutions, issues such as easy fabrication, strong durability, and low-cost materials must be addressed. The objective of this study was to propose soft and dry electrodes developed from polydimethylsiloxane (PDMS) and carbon nanotube (CNT) composites. Connected with both conventional and in-house NTAmp biosignal instruments for comparative studies, performances of the proposed dry electrodes were evaluated through electromyogram (EMG), electrocardiogram (ECG), and electroencephalogram (EEG) measurements. Results demonstrated that the capability of the PDMS/CNT electrodes to receive biosignals was on par with that of commercial electrodes (adhesive and gold-cup electrodes). Depending on the type of stimuli, a signal-to-noise ratio (SNR) of 5-10 dB range was achieved. The results of the study show that the performance of the proposed dry electrode is comparable to that of commercial electrodes, offering possibilities for diverse applications. These applications may include the physical examination of vital medical signs, the control of intelligent devices and robots, and the transmission of signals through flexible materials.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"53 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DOCTer: a novel EEG-based diagnosis framework for disorders of consciousness
Pub Date: 2024-09-10 | DOI: 10.1088/1741-2552/ad7904
Sha Zhao, Yue Cao, Wei Yang, Jie Yu, Chuan Xu, Wei Dai, Shijian Li, Gang Pan, Benyan Luo
Objective. Accurately diagnosing patients with disorders of consciousness (DOC) is challenging and prone to errors. Recent studies have demonstrated that electroencephalography (EEG), a non-invasive technique for recording the brain's spontaneous electrical activity, offers valuable insights for DOC diagnosis. However, some challenges remain: (1) the EEG signals have not been fully exploited, and (2) the data scale in most existing studies is limited. In this study, our goal is to differentiate between the minimally conscious state (MCS) and unresponsive wakefulness syndrome (UWS) using resting-state EEG signals, by proposing a new deep learning framework.
Approach. We propose DOCTer, an end-to-end framework for EEG-based DOC diagnosis. It extracts multiple pertinent features from the raw EEG signals, including time-frequency features and microstates, takes patients' clinical characteristics into account, and then combines all the features for diagnosis. To evaluate its effectiveness, we collected a large-scale dataset containing 409 resting-state EEG recordings from 128 UWS and 187 MCS cases.
Main results. Evaluated on our dataset, DOCTer achieves state-of-the-art performance compared to other methods. The temporal/spectral features contribute the most to the diagnosis task, and cerebral integrity is important for detecting the level of consciousness. We also investigate the influence of different EEG collection durations and numbers of channels, to help clinics make appropriate choices.
Significance. The DOCTer framework significantly improves the accuracy of DOC diagnosis, which is helpful for developing appropriate treatment programs. Findings derived from the large-scale dataset provide valuable insights for clinics.
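As an illustration of the time-frequency branch of such a feature set (the microstate and clinical-characteristic branches are omitted), relative band powers per channel can be computed from a Welch PSD. The band edges and window length below are conventional choices, not the DOCTer configuration:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_powers(eeg, fs):
    """eeg: (n_channels, n_samples). Returns (n_channels, n_bands) relative band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    total = psd.sum(axis=-1, keepdims=True)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=-1, keepdims=True) / total
             for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=-1)
```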
{"title":"DOCTer: a novel EEG-based diagnosis framework for disorders of consciousness.","authors":"Sha Zhao,Yue Cao,Wei Yang,Jie Yu,Chuan Xu,Wei Dai,Shijian Li,Gang Pan,Benyan Luo","doi":"10.1088/1741-2552/ad7904","DOIUrl":"https://doi.org/10.1088/1741-2552/ad7904","url":null,"abstract":"OBJECTIVEAccurately diagnosing patients with disorders of consciousness (DOC) is challenging and prone to errors. Recent studies have demonstrated that EEG (electroencephalography), a non-invasive technique of recording the spontaneous electrical activity of brains, offers valuable insights for DOC diagnosis. However, some challenges remain: 1) the EEG signals have not been fully used; and 2) the data scale in most existing studies is limited. In this study, our goal is to differentiate between minimally conscious state (MCS) and unresponsive wakefulness syndrome (UWS) using resting-state EEG signals, by proposing a new deep learning framework.APPROACHWe propose DOCTer, an end-to-end framework for DOC diagnosis based on EEG. It extracts multiple pertinent features from the raw EEG signals, including time-frequency features and microstates. Meanwhile, it takes clinical characteristics of patients into account, and then combines all the features together for the diagnosis. To evaluate its effectiveness, we collect a large-scale dataset containing 409 resting-state EEG recordings from 128 UWS and 187 MCS cases.MAIN RESULTSEvaluated on our dataset, DOCTer achieves the state-of-the-art performance, compared to other methods. The temporal/spectral features contributes the most to the diagnosis task. The cerebral integrity is important for detecting the consciousness level. Meanwhile, we investigate the influence of different EEG collection duration and number of channels, in order to help make the appropriate choices for clinics.SIGNIFICANCEThe DOCTer framework significantly improves the accuracy of DOC diagnosis, helpful for developing appropriate treatment programs. Findings derived from the large-scale dataset provide valuable insights for clinics.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"14 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I see artifacts: ICA-based EEG artifact removal does not improve deep network decoding across three BCI tasks
Pub Date: 2024-09-09 | DOI: 10.1088/1741-2552/ad788e
Taeho Kang, Yiyu Chen, Christian Wallraven
Objective. In this paper, we conduct a detailed investigation of the effect of IC-based noise rejection methods on neural network classifier-based decoding of electroencephalography (EEG) data across different task datasets.
Approach. We apply a pipeline matrix of two popular Independent Component (IC) decomposition methods (Infomax, AMICA) with three different component rejection strategies (none, ICLabel, and MARA) to three different EEG datasets (motor imagery, long-term memory formation, and visual memory). We cross-validate the processed data from each pipeline with three architectures commonly used for EEG classification (two convolutional neural network (CNN) models and one long short-term memory (LSTM) based model). We compare decoding performances at the within-participant and within-dataset levels.
Main results. Our results show that the benefit of using IC-based noise rejection for decoding analyses is at best minor, as component-rejected data did not show consistently better performance than data without rejections, especially given the significant computational resources required for ICA computations.
Significance. With ever-growing emphasis on transparency and reproducibility, as well as the obvious benefits arising from streamlined processing of large-scale datasets, there has been increased interest in automated methods for pre-processing EEG data. One prominent part of such pre-processing pipelines consists of identifying and potentially removing artifacts arising from extraneous sources. This is typically done via Independent Component (IC) based correction, for which numerous methods have been proposed, differing not only in how they decompose the raw data into ICs but also in how they reject the computed ICs. While the benefits of these methods are well established in univariate statistical analyses, it is unclear whether they help in multivariate scenarios, and specifically in neural network based decoding studies. As the computational costs of pre-processing large-scale datasets are considerable, it is important to consider whether the tradeoff between model performance and available resources is worth the effort.
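One arm of this pipeline matrix (Infomax decomposition with ICLabel rejection) can be reproduced with MNE-Python and the mne-icalabel package; AMICA and MARA require separate tooling, and the probability threshold below is an assumed setting, not the authors':

```python
import mne
from mne.preprocessing import ICA
from mne_icalabel import label_components

def infomax_iclabel(raw: mne.io.BaseRaw, reject_prob: float = 0.8) -> mne.io.BaseRaw:
    """One pipeline arm: extended-Infomax ICA followed by ICLabel rejection."""
    raw = raw.copy().load_data().filter(1.0, 100.0)   # ICLabel expects 1-100 Hz data
    raw.set_eeg_reference("average")                  # ...with an average reference
    ica = ICA(n_components=0.99, method="infomax",
              fit_params=dict(extended=True), random_state=0)
    ica.fit(raw)
    labels = label_components(raw, ica, method="iclabel")
    ica.exclude = [i for i, (lab, p) in enumerate(zip(labels["labels"],
                                                      labels["y_pred_proba"]))
                   if lab not in ("brain", "other") and p > reject_prob]
    return ica.apply(raw)  # the 'none' strategy would simply skip the exclude step
```

The decoded-versus-raw comparison in the paper amounts to running the downstream classifiers on both the output of such a function and the unrejected data.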
{"title":"I see artifacts: ICA-based EEG artifact removal does not improve deep network decoding across three BCI tasks.","authors":"Taeho Kang,Yiyu Chen,Christian Wallraven","doi":"10.1088/1741-2552/ad788e","DOIUrl":"https://doi.org/10.1088/1741-2552/ad788e","url":null,"abstract":"textit{Objective.}
In this paper, we conduct a detailed investigation on the effect of IC-based noise rejection methods in neural network classifier-based decoding of electroencephalography (EEG) data in different task datasets.
textit{Approach.}
We apply a pipeline matrix of two popular different Independent Component (IC) decomposition methods (Infomax, AMICA) with three different component rejection strategies (none, ICLabel, and MARA) on three different EEG datasets (Motor imagery, long-term memory formation, and visual memory). We cross-validate processed data from each pipeline with three architectures commonly used for EEG classification (two convolutional neural networks (CNN) and one long short term memory (LSTM) based model. We compare decoding performances on within-participant and within-dataset levels. 
textit{Main Results.}
Our results show that the benefit from using IC-based noise rejection for decoding analyses is at best minor, as component-rejected data did not show consistently better performance than data without rejections---especially given the significant computational resources required for ICA computations.
textit{Significance.}
With ever growing emphasis on transparency and reproducibility, as well as the obvious benefits arising from streamlined processing of large-scale datasets, there has been an increased interest in automated methods for pre-processing EEG data. One prominent part of such pre-processing pipelines consists of identifying and potentially removing artifacts arising from extraneous sources. This is typically done via Independent Component (IC) based correction for which numerous methods have been proposed, differing not only in the decomposition of the raw data into ICs, but also in how they reject the computed ICs. While the benefits of these methods are well established in univariate statistical analyses, it is unclear whether they help in multivariate scenarios, and specifically in neural network based decoding studies. As computational costs for pre-processing large-scale datasets are considerable, it is important to consider whether the tradeoff between model performance and available resources is worth the effort.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"06 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating spatial and temporal features for enhanced artifact removal in multi-channel EEG recordings
Pub Date: 2024-09-09 | DOI: 10.1088/1741-2552/ad788d
Jin Yin, Aiping Liu, LanLan Wang, Ruobing Qian, Xun Chen
Objective. Various artifacts in electroencephalography (EEG) are a major hurdle preventing brain-computer interfaces from real-life usage. Recently, deep learning-based EEG denoising methods have shown excellent performance. However, existing deep network designs inadequately leverage inter-channel relationships when processing multichannel EEG signals: most methods process multi-channel signals channel by channel. Considering the correlations among EEG channels during the same brain activity, this paper proposes utilizing channel relationships to enhance denoising performance.
Approach. We explicitly model the inter-channel relationships using the self-attention mechanism, hypothesizing that these correlations can support and improve the denoising process. Specifically, we introduce a novel denoising network, named the Spatial-Temporal Fusion Network (STFNet), which integrates stacked multi-dimension feature extractors to explicitly capture both temporal dependencies and spatial relationships.
Main results. The proposed network exhibits superior denoising performance, with a 24.27% reduction in relative root mean squared error compared to other methods on a public benchmark. STFNet proves effective in cross-dataset denoising and downstream classification tasks, improving accuracy by 1.40%, while also offering fast processing on CPU.
Significance. The experimental results demonstrate the importance of integrating spatial and temporal characteristics. The computational efficiency of STFNet makes it suitable for real-time applications and a potential tool for deployment in realistic environments.
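The core idea of modeling inter-channel relationships with self-attention can be sketched by treating each channel's time course as a token, so channels attend to each other. This is a generic PyTorch illustration, not the STFNet architecture:

```python
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Self-attention over EEG channels: each channel is one token of length n_samples."""
    def __init__(self, n_samples, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=n_samples, num_heads=n_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(n_samples)

    def forward(self, x):                 # x: (batch, n_channels, n_samples)
        out, _ = self.attn(x, x, x)       # channels attend to each other
        return self.norm(x + out)         # residual connection

noisy = torch.randn(8, 64, 512)           # a batch of 64-channel, 512-sample segments
print(ChannelSelfAttention(512)(noisy).shape)  # torch.Size([8, 64, 512])
```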
{"title":"Integrating spatial and temporal features for enhanced artifact removal in multi-channel EEG recordings.","authors":"Jin Yin,Aiping Liu,LanLan Wang,Ruobing Qian,Xun Chen","doi":"10.1088/1741-2552/ad788d","DOIUrl":"https://doi.org/10.1088/1741-2552/ad788d","url":null,"abstract":"OBJECTIVEVarious artifacts in electroencephalography (EEG) are a big hurdle to prevent brain-computer interfaces from real-life usage. Recently, deep learning-based EEG denoising methods have shown excellent performance. However, existing deep network designs inadequately leverage inter-channel relationships in processing multichannel EEG signals. Typically, most methods process multi-channel signals in a channel-by-channel way. Considering the correlations among EEG channels during the same brain activity, this paper proposes utilizing channel relationships to enhance denoising performance.APPROACHWe explicitly model the inter-channel relationships using the self attention mechanism, hypothesizing that these correlations can support and improve the denoising process. Specifically, we introduce a novel denoising network, named Spatial-Temporal Fusion Network (STFNet), which integrates stacked multi-dimension feature extractor to explicitly capture both temporal dependencies and spatial relationships.MAIN RESULTSThe proposed network exhibits superior denoising performance, with a 24.27% reduction in relative root mean squared error compared to other methods on a public benchmark. STFNet proves effective in cross-dataset denoising and downstream classification tasks, improving accuracy by 1.40%, while also offering fast processing on CPU.SIGNIFICANCEThe experimental results demonstrate the importance of integrating spatial and temporal characteristics. The computational efficiency of STFNet makes it suitable for real-time applications and a potential tool for deployment in realistic environments.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"23 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PD-ARnet: a deep learning approach for Parkinson's disease diagnosis from resting-state fMRI
Pub Date: 2024-09-09 | DOI: 10.1088/1741-2552/ad788b
Guangyao Li, Yalin Song, Mingyang Liang, Junyang Yu, Rui Zhai
Objective: The clinical diagnosis of Parkinson's disease (PD), which relies on medical history, clinical symptoms, and signs, is subjective and lacks sensitivity. Resting-state fMRI (rs-fMRI) has been demonstrated to be an effective biomarker for diagnosing PD.
Approach: This study proposes a deep learning approach for the automatic diagnosis of PD using rs-fMRI, named PD-ARnet. Specifically, PD-ARnet utilizes the Amplitude of Low Frequency Fluctuations (ALFF) and Regional Homogeneity (ReHo) extracted from rs-fMRI as inputs. The inputs are processed by a dual-branch 3D feature extractor, within which a Correlation-Driven weighting module captures complementary information from the two feature types. An Attention-Enhanced fusion module then merges the two types of features, and the fused features are fed into a fully connected layer for automatic diagnostic classification.
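A rough sketch of what a dual-branch 3D extractor with learned fusion weights might look like is given below; the layer sizes, the softmax gate standing in for the Correlation-Driven weighting and Attention-Enhanced fusion modules, and the two-class head are all assumptions, not the published PD-ARnet:

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Illustrative dual-branch 3D extractor for ALFF and ReHo maps (not PD-ARnet itself)."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.alff, self.reho = branch(), branch()
        self.gate = nn.Sequential(nn.Linear(32, 2), nn.Softmax(dim=-1))  # learned fusion weights
        self.head = nn.Linear(16, 2)                                     # PD vs. healthy control

    def forward(self, alff_vol, reho_vol):                  # each: (batch, 1, D, H, W)
        fa, fr = self.alff(alff_vol), self.reho(reho_vol)   # (batch, 16) each
        w = self.gate(torch.cat([fa, fr], dim=-1))          # (batch, 2) fusion weights
        fused = w[:, :1] * fa + w[:, 1:] * fr               # weighted merge of the branches
        return self.head(fused)
```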
Main results: Evaluated on 145 samples from the PPMI dataset, PD-ARnet achieved an average classification accuracy of 91.6% (95% confidence interval [CI]: 90.9%, 92.4%), precision of 94.7% (95% CI: 94.2%, 95.1%), recall of 86.2% (95% CI: 84.9%, 87.4%), F1 score of 90.2% (95% CI: 89.3%, 91.1%), and AUC of 92.8% (95% CI: 91.1%, 95.0%).
Significance: The proposed method has the potential to become a clinical auxiliary diagnostic tool for Parkinson's disease, reducing subjectivity in the diagnostic process and enhancing diagnostic efficiency and consistency.
{"title":"PD-ARnet: a deep learning approach for Parkinson's disease diagnosis from resting-state fMRI.","authors":"Guangyao Li,Yalin Song,Mingyang Liang,Junyang Yu,Rui Zhai","doi":"10.1088/1741-2552/ad788b","DOIUrl":"https://doi.org/10.1088/1741-2552/ad788b","url":null,"abstract":"The clinical diagnosis of Parkinson's disease (PD) relying on medical history, clinical symptoms, and signs is subjective and lacks sensitivity. Resting-state fMRI (rs-fMRI) has been demonstrated to be an effective biomarker for diagnosing Parkinson's disease.
Approach: This study proposes a deep learning approach for the automatic diagnosis of PD using rs-fMRI, named PD-ARnet. Specifically, PD-ARnet utilizes Amplitude of Low Frequency Fluctuations (ALFF) and Regional Homogeneity (ReHo) extracted from rs-fMRI as inputs. The inputs are then processed through a developed dual-branch 3D feature extractor to perform advanced feature extraction. During this process, a Correlation-Driven weighting module is applied to capture complementary information from both features. Subsequently, the Attention-Enhanced fusion module is developed to effectively merge two types of features, and the fused features are input into a fully connected layer for automatic diagnosis classification. 
Main results: Using 145 samples from the PPMI dataset to evaluate the detection performance of PD-ARnet, the results indicated an average classification accuracy of 91.6% (95% confidence interval [CI]: 90.9%, 92.4%), precision of 94.7% (95% CI: 94.2%, 95.1%), recall of 86.2% (95% CI: 84.9%, 87.4%), F1 score of 90.2% (95% CI: 89.3%, 91.1%), and AUC of 92.8% (95% CI: 91.1%, 95.0%).
Significance: The proposed method has the potential to become a clinical auxiliary diagnostic tool for Parkinson's disease, reducing subjectivity in the diagnostic process, and enhancing diagnostic efficiency and consistency.
.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"142 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OxcarNet: Sinc convolutional network with temporal and channel attention for prediction of Oxcarbazepine monotherapy responses in patients with newly diagnosed epilepsy
Pub Date: 2024-09-09 | DOI: 10.1088/1741-2552/ad788c
Runkai Zhang, Rong Rong, Yun Xu, Haixian Wang, Xiaoyun Wang
Objective: Monotherapy with antiepileptic drugs (AEDs) is the preferred strategy for the initial treatment of epilepsy. However, an inadequate response to the initially prescribed AED is a significant indicator of a poor long-term prognosis, emphasizing the importance of precisely predicting treatment outcomes with the initial AED regimen in patients with epilepsy.
Approach: We introduce OxcarNet, an end-to-end neural network framework developed to predict treatment outcomes in patients undergoing oxcarbazepine monotherapy. The proposed predictive model adopts a Sinc Module in its initial layers for adaptive identification of discriminative frequency bands. The derived feature maps are then processed through a Spatial Module, which characterizes the scalp distribution patterns of the electroencephalography (EEG) signals. Subsequently, these features are fed into an attention-enhanced Temporal Module to capture temporal dynamics and discrepancies. A Channel Module with an attention mechanism is employed to reveal inter-channel dependencies within the output of the temporal module, ultimately achieving response prediction. OxcarNet was rigorously evaluated using a proprietary dataset of retrospectively collected EEG data from newly diagnosed epilepsy patients at Nanjing Drum Tower Hospital. This dataset included patients who underwent long-term EEG monitoring in a clinical inpatient setting.
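The Sinc Module presumably follows the SincNet idea of convolutional filters parameterized only by learnable band-pass cutoffs, which is what makes the learned frequency bands directly interpretable. A minimal sketch of such a layer, with assumed filter count, kernel length, and sampling rate (not the OxcarNet code):

```python
import torch
import torch.nn as nn

class SincConv(nn.Module):
    """Illustrative SincNet-style layer: each filter is a learnable band-pass
    defined only by its low cutoff and bandwidth (not the OxcarNet implementation)."""
    def __init__(self, n_filters=16, kernel_size=129, fs=250.0):
        super().__init__()
        self.k = kernel_size
        self.low = nn.Parameter(torch.linspace(1., 30., n_filters))   # low cutoffs (Hz)
        self.band = nn.Parameter(torch.full((n_filters,), 4.0))       # bandwidths (Hz)
        t = (torch.arange(kernel_size) - kernel_size // 2).float() / fs
        self.register_buffer("t", t)                                  # filter times (s)
        self.register_buffer("window", torch.hamming_window(kernel_size))

    def forward(self, x):                        # x: (batch, 1, n_samples)
        low = torch.abs(self.low)
        high = low + torch.abs(self.band)
        # ideal band-pass = difference of two sinc low-pass impulse responses
        def sinc_lp(fc):
            return 2 * fc[:, None] * torch.special.sinc(2 * fc[:, None] * self.t[None, :])
        kernels = (sinc_lp(high) - sinc_lp(low)) * self.window
        return nn.functional.conv1d(x, kernels[:, None, :], padding=self.k // 2)

x = torch.randn(4, 1, 1000)                      # 4 single-channel EEG segments
print(SincConv()(x).shape)                       # torch.Size([4, 16, 1000])
```

Inspecting the learned `low` and `band` parameters after training is how one reads off predictive frequency bands, as the Main results do for the gamma range.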
Main results: OxcarNet demonstrated exceptional accuracy in predicting treatment outcomes for patients undergoing Oxcarbazepine monotherapy. In the ten-fold cross-validation, the model achieved an accuracy of 97.27%, and in the validation involving unseen patient data, it maintained an accuracy of 89.17%, outperforming six conventional machine learning methods and three generic neural decoding networks. These findings underscore the model's effectiveness in accurately predicting the treatment responses in patients with newly diagnosed epilepsy. The analysis of features extracted by the Sinc filters revealed a predominant concentration of predictive frequencies in the high-frequency range of the gamma band.
Significance: The findings of our study offer substantial support and new insights into tailoring early AED selection, enhancing the prediction accuracy for the responses of AEDs.
{"title":"OxcarNet: Sinc convolutional network with temporal and channel attention for prediction of Oxcarbazepine monotherapy responses in patients with newly diagnosed epilepsy.","authors":"Runkai Zhang,Rong Rong,Yun Xu,Haixian Wang,Xiaoyun Wang","doi":"10.1088/1741-2552/ad788c","DOIUrl":"https://doi.org/10.1088/1741-2552/ad788c","url":null,"abstract":"Monotherapy with antiepileptic drugs (AEDs) is the preferred strategy for the initial treatment of epilepsy. However, an inadequate response to the initially prescribed AED is a significant indicator of a poor long-term prognosis, emphasizing the importance of precise prediction of treatment outcomes with the initial AED regimen in patients with epilepsy.
Approach: We introduce OxcarNet, an end-to-end neural network framework developed to predict treatment outcomes in patients undergoing oxcarbazepine monotherapy. The proposed predictive model adopts a Sinc Module in its initial layers for adaptive identification of discriminative frequency bands. The derived feature maps are then processed through a Spatial Module, which characterizes the scalp distribution patterns of the electroencephalography (EEG) signals. Subsequently, these features are fed into an attention-enhanced Temporal Module to capture temporal dynamics and discrepancies. A Channel Module with an attention mechanism is employed to reveal inter-channel dependencies within the output of the temporal module, ultimately achieving response prediction. OxcarNet was rigorously evaluated using a proprietary dataset of retrospectively collected EEG data from newly diagnosed epilepsy patients at Nanjing Drum Tower Hospital. This dataset included patients who underwent long-term EEG monitoring in a clinical inpatient setting.
Main results: OxcarNet demonstrated exceptional accuracy in predicting treatment outcomes for patients undergoing Oxcarbazepine monotherapy. In the ten-fold cross-validation, the model achieved an accuracy of 97.27%, and in the validation involving unseen patient data, it maintained an accuracy of 89.17%, outperforming six conventional machine learning methods and three generic neural decoding networks. These findings underscore the model's effectiveness in accurately predicting the treatment responses in patients with newly diagnosed epilepsy. The analysis of features extracted by the Sinc filters revealed a predominant concentration of predictive frequencies in the high-frequency range of the gamma band.
Significance: The findings of our study offer substantial support and new insights into tailoring early AED selection, enhancing the prediction accuracy for the responses of AEDs.


.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"55 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond alpha band: prestimulus local oscillation and interregional synchrony of the beta band shape the temporal perception of the audiovisual beep-flash stimulus
Pub Date: 2024-06-13 | DOI: 10.1088/1741-2552/ace551
Zeliang Jiang, Xingwei An, Shuang Liu, Erwei Yin, Ye Yan, Dong Ming
Objective. Multisensory integration is more likely to occur if the multimodal inputs fall within a narrow temporal window called the temporal binding window (TBW). Prestimulus local neural oscillations and interregional synchrony within sensory areas can modulate cross-modal integration. Previous work has examined the role of ongoing neural oscillations in audiovisual temporal integration, but there is no unified conclusion. This study aimed to explore whether local ongoing neural oscillations and interregional audiovisual synchrony modulate audiovisual temporal integration.
Approach. Human participants performed a simultaneity judgment (SJ) task with beep-flash stimuli while electroencephalography was recorded. We focused on two stimulus onset asynchrony (SOA) conditions in which subjects reported a ∼50% proportion of synchronous responses for auditory- and visual-leading SOAs (A50V and V50A).
Main results. We found that alpha band power was larger for synchronous responses in the central-right posterior and posterior sensors in the A50V and V50A conditions, respectively. This suggests that alpha band power reflects neuronal excitability in the auditory or visual cortex, which can modulate audiovisual temporal perception depending on the leading sense. Additionally, the SJs were modulated by opposite phases of the alpha (5-10 Hz) and low beta (14-20 Hz) bands in the A50V condition, and by the low beta band (14-18 Hz) in the V50A condition. One cycle of alpha or two cycles of beta oscillations matched an auditory-leading TBW of ∼86 ms, while two cycles of beta oscillations matched a visual-leading TBW of ∼105 ms. This result indicates that opposite phases in the alpha and beta bands reflect opposite cortical excitability, which modulated the audiovisual SJs. Finally, we found stronger high beta (21-28 Hz) audiovisual phase synchronization for synchronous responses in the A50V condition. The phase synchrony of the beta band might be related to maintaining information flow between visual and auditory regions in a top-down manner.
Significance. These results clarify whether and how the prestimulus brain state, including local neural oscillations and functional connectivity between brain regions, affects audiovisual temporal integration.
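Interregional phase synchronization of the kind reported here is commonly quantified with a phase-locking value (PLV). A minimal sketch for two sensor signals, assuming a Butterworth band-pass and Hilbert-transform phases (the authors' exact connectivity measure may differ):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(21, 28)):
    """Phase-locking value between two signals in a given band (e.g. high beta).

    Returns a value in [0, 1]: 1 means a perfectly constant phase difference.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```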
{"title":"Beyond alpha band: prestimulus local oscillation and interregional synchrony of the beta band shape the temporal perception of the audiovisual beep-flash stimulus.","authors":"Zeliang Jiang, Xingwei An, Shuang Liu, Erwei Yin, Ye Yan, Dong Ming","doi":"10.1088/1741-2552/ace551","DOIUrl":"10.1088/1741-2552/ace551","url":null,"abstract":"<p><p><i>Objective.</i>Multisensory integration is more likely to occur if the multimodal inputs are within a narrow temporal window called temporal binding window (TBW). Prestimulus local neural oscillations and interregional synchrony within sensory areas can modulate cross-modal integration. Previous work has examined the role of ongoing neural oscillations in audiovisual temporal integration, but there is no unified conclusion. This study aimed to explore whether local ongoing neural oscillations and interregional audiovisual synchrony modulate audiovisual temporal integration.<i>Approach.</i>The human participants performed a simultaneity judgment (SJ) task with the beep-flash stimuli while recording electroencephalography. We focused on two stimulus onset asynchrony (SOA) conditions where subjects report ∼50% proportion of synchronous responses in auditory- and visual-leading SOA (A50V and V50A).<i>Main results.</i>We found that the alpha band power is larger in synchronous response in the central-right posterior and posterior sensors in A50V and V50A conditions, respectively. The results suggested that the alpha band power reflects neuronal excitability in the auditory or visual cortex, which can modulate audiovisual temporal perception depending on the leading sense. Additionally, the SJs were modulated by the opposite phases of alpha (5-10 Hz) and low beta (14-20 Hz) bands in the A50V condition while the low beta band (14-18 Hz) in the V50A condition. One cycle of alpha or two cycles of beta oscillations matched an auditory-leading TBW of ∼86 ms, while two cycles of beta oscillations matched a visual-leading TBW of ∼105 ms. This result indicated the opposite phases in the alpha and beta bands reflect opposite cortical excitability, which modulated the audiovisual SJs. Finally, we found stronger high beta (21-28 Hz) audiovisual phase synchronization for synchronous response in the A50V condition. The phase synchrony of the beta band might be related to maintaining information flow between visual and auditory regions in a top-down manner.<i>Significance.</i>These results clarified whether and how the prestimulus brain state, including local neural oscillations and functional connectivity between brain regions, affects audiovisual temporal integration.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9760856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BRAND: a platform for closed-loop experiments with deep network models
Pub Date: 2024-04-17 | DOI: 10.1088/1741-2552/ad3b3a
Yahia H Ali, Kevin Bodkin, Mattia Rigotti-Thompson, Kushant Patel, Nicholas S Card, Bareesh Bhaduri, Samuel R Nason-Tomaszewski, Domenick M Mifsud, Xianda Hou, Claire Nicolas, Shane Allcroft, Leigh R Hochberg, Nicholas Au Yong, Sergey D Stavisky, Lee E Miller, David M Brandman, Chethan Pandarinath
Objective. Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g. Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g. C and C++). Approach. To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows for acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. Main results. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov Identifier: NCT00912041) performed a standard cursor control task, in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent variable models like Latent Factor Analysis via Dynamical Systems. Significance. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.
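The Redis-stream pattern BRAND builds on can be seen in miniature with redis-py: a producer node appends data with XADD while a consumer node reads it back with XREAD. A toy sketch assuming a Redis server on localhost; the real system's node/graph configuration is omitted:

```python
import numpy as np
import redis

r = redis.Redis()  # assumes a local Redis server on the default port

# Producer node: append a 1 ms chunk of 1024-channel neural data to a stream.
chunk = np.random.randn(1024, 30).astype(np.float32)   # 30 samples @ 30 kHz = 1 ms
r.xadd("neural_data", {"samples": chunk.tobytes()})

# Consumer node: read entries from the stream (block up to 1 s for new data).
streams = r.xread({"neural_data": "0-0"}, count=1, block=1000)
_, entries = streams[0]
payload = entries[0][1][b"samples"]
decoded = np.frombuffer(payload, dtype=np.float32).reshape(1024, 30)
```

Because any language with a Redis client can produce or consume such streams, this design choice is what lets a C++ acquisition node feed a Python ANN decoder in the same graph.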
{"title":"BRAND: a platform for closed-loop experiments with deep network models","authors":"Yahia H Ali, Kevin Bodkin, Mattia Rigotti-Thompson, Kushant Patel, Nicholas S Card, Bareesh Bhaduri, Samuel R Nason-Tomaszewski, Domenick M Mifsud, Xianda Hou, Claire Nicolas, Shane Allcroft, Leigh R Hochberg, Nicholas Au Yong, Sergey D Stavisky, Lee E Miller, David M Brandman, Chethan Pandarinath","doi":"10.1088/1741-2552/ad3b3a","DOIUrl":"https://doi.org/10.1088/1741-2552/ad3b3a","url":null,"abstract":"<italic toggle=\"yes\">Objective.</italic> Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g. Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g. C and C++). <italic toggle=\"yes\">Approach.</italic> To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed <italic toggle=\"yes\">nodes</italic>, which communicate with each other in a <italic toggle=\"yes\">graph</italic> via streams of data. Its asynchronous design allows for acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. <italic toggle=\"yes\">Main results.</italic> In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov Identifier: NCT00912041) performed a standard cursor control task, in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent variable models like Latent Factor Analysis via Dynamical Systems. <italic toggle=\"yes\">Significance.</italic> By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"48 1","pages":""},"PeriodicalIF":4.0,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140611256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}