
Annual International Conference of the IEEE Engineering in Medicine and Biology Society: Latest Publications

Beyond Frequency: Leveraging Spatial Features in SSVEP-Based Brain-Computer Interfaces with Visual Animations.
Yike Sun, Ziyu Zhang, Qi Qi, Xiaoyang Li, Jingnan Sun, Kemeng Zhang, Jiaxiang Zhuang, Xiaogang Chen, Xiaorong Gao

Current research on steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) predominantly focuses on utilizing the frequency- and phase-locking characteristics of SSVEP for encoding purposes. In this study, we propose an innovative paradigm wherein SSVEP serves as a marker, integrated with different types of motion animations to identify distinct neural processing pathways associated with these animations. This approach enables the classification of SSVEP-based BCIs without relying on frequency features. We designed six distinct animations corresponding to six behaviors commonly observed in daily life. Each animation was tagged with a uniform 6 Hz stimulus frequency, forming a six-target classification task. Offline testing was conducted with 10 participants. Despite identical frequency components, significant differences in spatial distribution corresponding to the animations were observed, likely due to the behavioral variations in the animations. Classification analysis demonstrated an accuracy of 0.93 within a 6-second window, validating the practical feasibility of this approach. This paradigm offers a novel direction for the advancement of SSVEP-based BCIs, potentially enabling the integration of multi-sensory information.
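Since the paradigm discriminates targets by spatial distribution rather than frequency, the decoding step can be sketched as nearest-template matching on per-channel 6 Hz amplitudes. The simulation below is illustrative only (channel count, noise level, and spatial patterns are invented, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, n_ch = 250, 6.0, 8            # assumed sampling rate, 6 s window, channels
t = np.arange(int(fs * dur)) / fs

patterns = rng.uniform(0.2, 1.0, size=(6, n_ch))   # 6 hypothetical animations

def simulate_trial(pattern):
    """Every class flickers at the same 6 Hz; only the spatial weighting differs."""
    carrier = np.sin(2 * np.pi * 6.0 * t)
    return pattern[:, None] * carrier + 0.5 * rng.standard_normal((n_ch, t.size))

def spatial_feature(trial):
    """Per-channel amplitude at 6 Hz: the frequency is fixed, so only the
    spatial distribution of this vector carries class information."""
    ref = np.exp(-2j * np.pi * 6.0 * t)
    return np.abs(trial @ ref) / t.size

# class templates from 10 training trials each, then nearest-template decoding
templates = np.array([np.mean([spatial_feature(simulate_trial(p))
                               for _ in range(10)], axis=0) for p in patterns])
correct = 0
for label, p in enumerate(patterns):
    for _ in range(10):
        d = np.linalg.norm(templates - spatial_feature(simulate_trial(p)), axis=1)
        correct += int(np.argmin(d)) == label
accuracy = correct / 60
```

On this toy data the spatial templates separate cleanly despite the identical frequency content, which is the core of the paradigm.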

Citations: 0
A Hybrid Deep Learning Model for Sleep Staging with Multi-Domain Feature Fusion from Single-Channel EEG.
Xinlei Zhang, Junwei Ma, Keifei Liu, Wanqi Chen, Kang Ding, Shuangyuan Yang, Fan Li, Fengyu Cong

Automatic sleep staging typically requires multi-channel EEG data, limiting its application in portable devices. To address this, we propose a hybrid deep learning model that utilizes multi-domain features from single-channel EEG data collected via polysomnography (PSG). Our model employs two feature extractors to capture time-domain and time-frequency-domain features, which are fused for final predictions. Validated on the Haaglanden Medisch Centrum Sleep Centre Database (HMC) with EEG data from 151 subjects, the model achieves an accuracy of 0.747 and an F1 score of 0.742. Compared to state-of-the-art methods, it shows improved multi-classification performance, particularly in N3 stage detection. This study highlights the potential of single-channel EEG for accurate sleep staging and the development of portable PSG-based monitoring systems. Clinical Relevance - This study develops a deep learning model for automatic sleep staging using only a single-channel EEG; it would help sleep physicians automatically classify sleep stages.
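The two-extractor fusion idea can be illustrated with hand-crafted stand-ins: a time-domain branch (Hjorth-style parameters) and a time-frequency branch (band log-powers), concatenated into one vector. The feature choices and the 100 Hz sampling rate are assumptions for illustration, not the paper's learned extractors:

```python
import numpy as np

fs = 100  # Hz, an assumed single-channel EEG sampling rate

def time_features(x):
    """Time-domain branch: variance plus Hjorth mobility and complexity."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    var = np.var(x)
    mob = np.sqrt(np.var(dx) / var)
    comp = np.sqrt(np.var(ddx) / np.var(dx)) / mob
    return np.array([var, mob, comp])

def timefreq_features(x):
    """Time-frequency branch: mean log power in the classic sleep bands."""
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]   # delta, theta, alpha, beta
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return np.array([np.log(psd[(freqs >= lo) & (freqs < hi)].mean() + 1e-12)
                     for lo, hi in bands])

def fused_features(epoch):
    """Late fusion by concatenation, mirroring the two-branch design."""
    return np.concatenate([time_features(epoch), timefreq_features(epoch)])

epoch = np.random.default_rng(1).standard_normal(30 * fs)  # one 30 s epoch
vec = fused_features(epoch)
```

A downstream classifier would then consume `vec`; in the paper that role is played by the deep model's prediction head.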

Citations: 0
A Joint Optimization Guided Deep Learning Model based on CNN and Channel-Wise Transformers for Robust Sleep Stage Classification from EEG Signal.
Angkon Deb, Celia Shahnaz, Mohammad Saquib

Sleep stage classification is a critical task in sleep research, with significant implications for diagnosing and treating sleep disorders. Traditional methods rely on manual scoring of polysomnography (PSG) data, which is time-consuming and prone to human error. While recent advances in deep learning have enabled automated sleep stage classification, challenges persist in handling the complex, non-linear patterns of physiological signals. Existing models are often computationally expensive, require sophisticated feature extraction methods, and are unsuitable for real-time implementation. To address these limitations, we propose a lightweight and efficient dual-branch deep-learning model that leverages the feature extraction capabilities of CNNs and the channel-wise attention mechanisms of Transformers. Unlike conventional transformers, it avoids excessive computational complexity while effectively capturing both local and global dependencies in physiological signals. The model is validated on four benchmark datasets (SleepEDF-20, SleepEDF-78, SleepEDFx, and SHHS) and demonstrates superior performance compared to several baseline algorithms. Our proposed algorithm achieves state-of-the-art results across all datasets, highlighting its robustness and scalability for real-world applications. Combining the strengths of CNNs and Transformers, it offers a promising solution for accurate and efficient sleep stage classification, paving the way for improved diagnosis and treatment of sleep disorders. The code for the proposed algorithm is publicly available at https://github.com/ang-frozen/embc2025, enabling reproducibility and further research.
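A minimal numpy sketch of channel-wise attention, in which channels (not time steps) act as the tokens, so the attention matrix scales with channel count rather than sequence length. Random projection weights stand in for trained ones; dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(x, d_k=16):
    """Self-attention over channels: x is (n_channels, n_timesteps), and the
    attention matrix is only (n_channels, n_channels), keeping cost low."""
    n_ch, n_t = x.shape
    wq, wk, wv = (rng.standard_normal((n_t, d_k)) / np.sqrt(n_t) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv                   # (n_ch, d_k) each
    attn = softmax(q @ k.T / np.sqrt(d_k), axis=-1)    # (n_ch, n_ch) weights
    return attn @ v

x = rng.standard_normal((8, 3000))   # e.g. 8 channels, one 30 s epoch at 100 Hz
out = channel_attention(x)
```

In the paper this attention branch runs alongside a CNN feature extractor; here only the attention mechanics are shown.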

Citations: 0
A Bidirectional Long Short-Term Memory Deep Learning Model for Classification of Pulse Waveform.
Diletta Guberti, Zongheng Guo, Antoine Herpain, Marta Carrara, Manuela Ferrario

The morphology of the arterial blood pressure (ABP) waveform has been demonstrated to serve as a significant indicator of the patient's condition and a marker of impending changes. The wave separation analysis (WSA) approach is based on invasive measures of concomitant arterial blood flow (ABF) and ABP. Other methods were developed as well, but the analyses were limited to waveforms with the physiological shape, namely Type A. This study introduces a bidirectional long short-term memory (BiLSTM) deep learning model to classify ABP beats into Type A and Type B/C, where the latter group reflects a condition of altered vascular compliance and resistance. The models were developed using central (aortic) and peripheral (femoral) waveforms. The best models achieved an accuracy of 96% and 90% for aortic and femoral signals, respectively. The ultimate objective of this research is to enhance non-invasive cardiovascular monitoring and facilitate the early detection of arterial alterations.
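The beat-level task can be illustrated with synthetic Type A versus Type B/C beats, using nearest-centroid matching as a deliberately simple stand-in for the paper's BiLSTM. The beat shapes and the dicrotic-notch weighting below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 128)   # one beat, normalized in time

def beat(kind):
    """Toy ABP beats: Type A keeps a clear dicrotic wave, Type B/C loses it,
    mimicking altered vascular compliance and resistance."""
    systolic = np.exp(-((t - 0.15) / 0.06) ** 2)
    dicrotic = np.exp(-((t - 0.45) / 0.05) ** 2)
    w = 0.45 if kind == "A" else 0.05
    return systolic + w * dicrotic + 0.02 * rng.standard_normal(t.size)

# class centroids from 20 training beats each (nearest-centroid stand-in)
train = {k: np.mean([beat(k) for _ in range(20)], axis=0) for k in ("A", "BC")}

def classify(x):
    return min(train, key=lambda k: np.linalg.norm(x - train[k]))

acc = np.mean([classify(beat(k)) == k for k in ("A", "BC") for _ in range(25)])
```

A BiLSTM replaces the centroid matching when beat morphology varies more subtly than in this toy example.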

Citations: 0
A Battery-Free Unintentional-Fall Detection System Utilizing TENG Insoles.
Haruki Higoshi, Enzo Osumi, Tamon Miyake, Ryosuke Tsumura, Shigeki Sugano, Hiroki Shigemune

The increasing elderly population has made falls due to frailty a critical issue, with physical and cognitive factors interacting to elevate the risks of fractures and solitary deaths. Falls are challenging to predict because multiple factors are involved, necessitating the development of continuous gait monitoring and fall detection technologies. Although various fall detection methods have been proposed, many rely on batteries that require maintenance and add circuit complexity, cost, or both. This study aims to develop an unintentional-fall detection system by utilizing triboelectric nanogenerator (TENG) technology to create a battery-free insole device. The device was tested to analyze the features of the voltage signals produced by unintentional falls. The results suggest that the generated signals can be reliably distinguished using the frequency-amplitude characteristics and the ratio of maximum to minimum voltage within one gait cycle, indicating the feasibility of a battery-free system capable of detecting unintentional falls.
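The two reported discriminative features, dominant-frequency amplitude and the max/min voltage ratio over one gait cycle, suggest a simple rule-based detector. The sampling rate, signal shapes, and thresholds below are hypothetical placeholders, not the device's calibrated values:

```python
import numpy as np

fs = 100  # assumed sampling rate of the insole readout, Hz

def cycle_features(v):
    """Dominant-frequency amplitude and max/min voltage ratio for one cycle."""
    spec = np.abs(np.fft.rfft(v - v.mean()))
    peak_amp = spec.max() / v.size
    ratio = v.max() / abs(v.min())
    return peak_amp, ratio

def is_fall(v, amp_thr=0.5, ratio_thr=3.0):
    """Hypothetical thresholds; real values would be calibrated per device."""
    amp, ratio = cycle_features(v)
    return amp > amp_thr or ratio > ratio_thr

rng = np.random.default_rng(3)
t = np.arange(fs) / fs
# regular gait: symmetric oscillation; fall: one large asymmetric transient
walk = 0.4 * np.sin(2 * np.pi * 2 * t) + 0.05 * rng.standard_normal(fs)
fall = np.where(t < 0.2, 4.0 * np.exp(-t / 0.05), -0.3) + 0.05 * rng.standard_normal(fs)
```

The asymmetric transient drives the max/min ratio far above the walking baseline, which is the separability the abstract reports.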

Citations: 0
A Radiomics-Based Machine Learning Model to Predict Total Knee Arthroplasty Failure from Plain Radiographs.
Anna Corti, Sarah Galante, Katia Chiappetta, Mattia Loppini, Valentina D A Corino

The rising number of total knee arthroplasty (TKA) revisions, combined with inferior outcomes compared to primary TKA, highlights the critical need for early detection of primary TKA failure. The present work proposes a radiomics-based machine learning model to automatically detect TKA failure from radiographs. The dataset comprised radiographs from 44 failed and 51 non-failed TKA patients. Following preprocessing, 465 radiomic features were extracted. A cross-validation procedure consisting of 100 repeated training-validation splits was implemented. The training phase encompassed feature selection, data balancing, and machine learning classifier training. Four feature selection approaches were evaluated in combination with several classifiers. Based on the average performance metrics on the validation set, Least Absolute Shrinkage and Selection Operator (LASSO) feature selection combined with a Logistic Regression (LR) classifier achieved the best performance, with an F1-score of 0.701, a balanced accuracy of 0.710, and an area under the curve (AUC) of 0.783. The results demonstrate the potential of the developed radiomics-based approach for automatically detecting TKA failure from plain radiographs. Clinical Relevance - The increasing number of revision procedures poses significant challenges for healthcare systems, highlighting the critical need for automated early detection of primary TKA failure. The developed model can support clinicians by reducing their workload and minimizing inter- and intra-observer variability.
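The winning LASSO-plus-LR combination with repeated splits maps naturally onto scikit-learn. The synthetic data, `alpha` value, and split counts below are placeholders rather than the study's settings (which used 100 repeated splits on 95 radiographs with 465 features):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in: 95 "radiographs", 465 "radiomic features"
X, y = make_classification(n_samples=95, n_features=465, n_informative=12,
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                       # radiomics needs scaling
    ("lasso", SelectFromModel(Lasso(alpha=0.01, max_iter=5000))),  # sparse selection
    ("lr", LogisticRegression(max_iter=1000)),         # final classifier
])

# repeated stratified splits, analogous to the repeated training-validation scheme
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=4, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="balanced_accuracy")
mean_score = scores.mean()
```

Wrapping selection inside the `Pipeline` ensures LASSO is refit on each training split, avoiding the selection leakage that inflates validation metrics.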

Citations: 0
A Proof-of-Concept Spike Based Neuromorphic Brain-Computer Interface.
E B Dijkema, C M A Pennartz, U Olcese

Closed-loop brain-computer interfaces (BCIs) hold promise for restoring function after neurological damage by dynamically processing neural signals and delivering targeted brain stimulation. To achieve clinically meaningful outcomes, such systems must operate with high spatiotemporal precision. This work aims to demonstrate a proof-of-concept neuromorphic BCI that processes neural spike events in near-real time, without requiring preprocessing beyond signal filtering and spike detection. Methods - We developed a system that acquires neural signals and streams spike events into a spiking neural network (SNN) running on SpiNNaker neuromorphic hardware. We evaluated the system's performance using both in vivo recordings from mouse visual cortex and simulated neural waveforms. We measured the roundtrip latency, defined as the time from spike detection to an output spike generated by the SNN. Results - Under baseline conditions with no hidden SNN layers, mean roundtrip latency was 4.69 ms (±1.70 ms). Adding hidden layers increased latency by approximately 3.65 ms per layer, reflecting the computational overhead of deeper networks. The system successfully detected and processed spikes in near-real time, demonstrating that neuromorphic hardware can manage spike-based input at speeds suitable for closed-loop intervention. Discussion - These findings indicate that neuromorphic SNNs can rapidly process neural signals, providing a foundation for closed-loop BCIs capable of bypassing damaged neural pathways. Future efforts will involve implementing stimulation protocols and functional SNNs. Such developments may ultimately facilitate more effective, flexible, and power-efficient neuroprosthetic devices.
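The roundtrip measurement can be sketched as threshold-based spike detection feeding a stub that stands in for the SpiNNaker SNN. Everything here is an assumption for illustration (sampling rate, detector, injected spikes), and on a CPU it measures only Python overhead, not the paper's 4.69 ms hardware figure:

```python
import time
import numpy as np

rng = np.random.default_rng(4)

def detect_spikes(x, thr=5.0):
    """Negative threshold crossing with a robust (MAD-based) noise estimate,
    a common minimal spike detector for extracellular recordings."""
    sigma = np.median(np.abs(x)) / 0.6745
    return np.flatnonzero(x < -thr * sigma)

def snn_stub(spike_idx):
    """Stand-in for the SpiNNaker SNN: one output event per input spike."""
    return list(spike_idx)

x = rng.standard_normal(30000)      # 1 s of noise at an assumed 30 kHz
x[[5000, 12000, 21000]] -= 30.0     # three injected spikes

t0 = time.perf_counter()
events = snn_stub(detect_spikes(x))
roundtrip_ms = (time.perf_counter() - t0) * 1e3  # software-only roundtrip latency
```

In the real system the stub is replaced by event streaming to neuromorphic hardware, and the latency clock stops at the SNN's output spike.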

Citations: 0
A Novel Levant's Differentiator-Based Descriptor for EEG-Based Motor Intent Decoding.
Frank Kulwa, Doreen S Sarwatt, Mojisola G Asogbon, Jiaming Huang, Rami N Khushaba, Tolulope T Oyemakinde, Guanglin Li, Oluwarotimi W Samuel, Hai Li, Yongcheng Li

Motor intent (MI)-based brain-computer interfaces (BCIs) have been extensively studied to improve the performance and clinical realization of assistive robots for motor recovery in stroke patients. However, their decoding performance remains limited by the low spatial resolution and signal-to-noise ratio of electroencephalography (EEG), particularly when accurately deciphering hand movements, which reduces classification performance. Therefore, we have developed a novel feature extraction technique that exploits Levant's differentiators to extract distinct patterns in EEG signals and employs symmetric positive definite (SPD) matrices to effectively leverage the spatial-temporal properties of the EEG signal. Results from nine post-stroke patients and fifteen normal subjects showed improved decoding accuracies of 99.16±0.64% and 99.30±0.69%, respectively, in classifying twenty-four hand motor intents, significantly outperforming existing related methods. Thus, the proposed technique has the potential to greatly enhance the reliability and effectiveness of EEG-based control systems for post-stroke rehabilitation. Clinical Relevance - The outcome of this study can lead to better control of rehabilitation robots and improve the recovery speed of stroke patients.
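The core building block, Levant's robust differentiator, can be written directly from its defining first-order (super-twisting) equations; the SPD-matrix stage is omitted here. The gains follow the common tuning lam0 ≈ 1.5·sqrt(L), lam1 ≈ 1.1·L for a Lipschitz bound |f''| ≤ L, and are assumptions matched to the test signal, not the paper's values:

```python
import numpy as np

def levant_diff(f, dt, lam0=9.5, lam1=44.0):
    """First-order Levant differentiator (super-twisting form), Euler-discretized.
    Assumed gains correspond to L ~ 40 for the 1 Hz test sinusoid below."""
    z0, z1 = float(f[0]), 0.0
    out = np.empty_like(f)
    for i, fi in enumerate(f):
        e = z0 - fi
        v = z1 - lam0 * np.sqrt(abs(e)) * np.sign(e)   # derivative estimate
        out[i] = v
        z0 += v * dt                                   # integrate estimator states
        z1 += -lam1 * np.sign(e) * dt
    return out

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
sig = np.sin(2 * np.pi * t)
d_est = levant_diff(sig, dt)
# compare against the true derivative after a 1 s convergence transient
err = np.abs(d_est[1000:] - 2 * np.pi * np.cos(2 * np.pi * t[1000:])).mean()
```

Unlike a finite difference, this estimator is robust to bounded noise, which is presumably what makes it attractive as an EEG pattern descriptor.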

DOI: 10.1109/EMBC58623.2025.11253249. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2025, pp. 1-6, published 2025-07-01.
Citations: 0
A Unified Learning and Evaluation Framework for Infant Cry-based Verification.
Xinyu Zhang, Ming Xia, Dongmin Huang, Guanghang Liao, Wenjin Wang

Newborns communicate with the outside world primarily by crying. Infant cry-based verification can reduce the risk of mix-ups in hospital obstetrics. Recent studies have explored the potential of using infant cries for identity verification. Yet model performance remains limited by training on variable-length clips and by evaluating the complete audio recording from a single view. To this end, we propose a novel unified training and evaluation framework that uses fixed-length segments during training to ensure input consistency and incorporates a multi-view joint evaluation strategy by associating the audio recording with its local segments. Extensive experiments conducted on the public CryCeleb2023 dataset show that our framework leads to consistent improvements on different verification models. Specifically, the Equal Error Rate (EER) was reduced by 10.29% for the whisper-PMFA model, 6.63% for the X-Vector model, and 5.91% for the ECAPA-TDNN model. These results demonstrate the effectiveness of our fixed-length segment training and slice-based multi-view evaluation strategy in enhancing model stability and evaluation accuracy, providing a more robust framework for newborn voice verification. The source code is released at https://github.com/contactless-healthcare/Unified-Infant-Cry-Verification.
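The fixed-length segmentation and the Equal Error Rate (EER) metric can be illustrated as follows. This is a generic sketch, not the authors' released code: it uses a simple min-of-max EER approximation, and the segment sizes and toy scores are invented for demonstration.

```python
import numpy as np

def fixed_length_segments(x, seg_len, hop):
    """Slice a 1-D recording into overlapping fixed-length segments,
    giving the model inputs of consistent size."""
    starts = range(0, len(x) - seg_len + 1, hop)
    return np.stack([x[s:s + seg_len] for s in starts])

def equal_error_rate(scores, labels):
    """EER: the operating point where the false-accept rate (impostor pairs
    accepted) equals the false-reject rate (genuine pairs rejected).
    scores: higher = more likely same identity; labels: 1 genuine, 0 impostor.
    Uses a simple min-of-max approximation over observed thresholds."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    eer = 1.0
    for thr in np.unique(scores):
        far = np.mean(scores[labels == 0] >= thr)  # impostors accepted
        frr = np.mean(scores[labels == 1] < thr)   # genuine rejected
        eer = min(eer, max(far, frr))
    return eer

segs = fixed_length_segments(np.arange(10.0), seg_len=4, hop=2)
print(segs.shape)  # (4, 4)

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(equal_error_rate(scores, labels), 3))  # 0.333
```

A multi-view joint evaluation in the abstract's sense would score both the whole recording and its local segments and fuse the results; here only the segment slicing and the metric are shown.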

DOI: 10.1109/EMBC58623.2025.11254466. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2025, pp. 1-4, published 2025-07-01.
Citations: 0
A Novel Multi-Stage Algorithm for Real-Time Detection and Correction of Ocular Artifacts in EEG: A Calibration-Free Approach.
Vincenzo Ronca, Gianluca Di Flumeri, Leonardo Lungarini, Rossella Capotorto, Daniele Germano, Andrea Giorgi, Gianluca Borghini, Fabio Babiloni, Pietro Arico

Ocular artifacts, particularly blinks, significantly affect the integrity of electroencephalographic (EEG) signals, posing a challenge for real-time applications. Traditional correction methods often require a calibration phase or additional electrooculogram (EOG) channels, limiting their applicability in mobile and real-world settings. This study presents a novel detection and correction method, CFo-CLEAN, designed for online ocular artifact correction without the need for prior calibration. The proposed method integrates an Enhanced Adaptive Data-driven Algorithm (eADA) for real-time identification and correction of ocular artifacts directly from EEG signals. Unlike conventional approaches, this implementation adapts dynamically to ongoing EEG variations, enhancing flexibility and performance. The study evaluates the CFo-CLEAN method using EEG data recorded from 38 participants during real-world driving scenarios. Performance comparisons were conducted against established correction techniques, including Independent Component Analysis (ICA), regression-based methods, and subspace reconstruction approaches. The evaluation considered both artifact removal efficiency and EEG signal preservation across different experimental conditions. Results demonstrated that the method effectively reduced ocular artifact contamination while preserving neurophysiological content. Specifically, two implementations of the method, utilizing 60-second and 90-second time windows, were analyzed, revealing that longer windows provided superior EEG signal preservation, particularly in higher frequency bands. These findings validate the effectiveness of the CFo-CLEAN method for real-time applications, making it a valuable tool for brain-computer interfaces (BCIs), neuroergonomics, and cognitive state monitoring. By avoiding the need for a calibration phase and incorporating adaptive processing, this method represents a significant advancement in real-time EEG artifact correction, facilitating its deployment in dynamic, real-world environments.
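For contrast with the baselines listed in the abstract, here is a minimal sketch of classic regression-based ocular correction. This is the EOG-dependent baseline that CFo-CLEAN is designed to avoid, not the CFo-CLEAN/eADA method itself; the blink template and the per-channel mixing weights are synthetic.

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Classic regression-based ocular correction: estimate least-squares
    propagation weights from the EOG reference(s) to each EEG channel,
    then subtract the projected ocular activity."""
    eog = np.atleast_2d(eog)                        # (n_eog, n_samples)
    B, *_ = np.linalg.lstsq(eog.T, eeg.T, rcond=None)
    return eeg - (eog.T @ B).T

rng = np.random.default_rng(1)
n = 1000
blink = np.exp(-0.5 * ((np.arange(n) - 500) / 30.0) ** 2)  # synthetic blink
brain = 0.5 * rng.standard_normal((4, n))                  # toy neural activity
eeg = brain + np.outer([1.0, 0.8, 0.4, 0.2], blink)        # blink leaks frontally
clean = regress_out_eog(eeg, blink)
print(np.sum(clean**2) < np.sum(eeg**2))                   # True: ocular energy removed
```

The drawback this baseline illustrates is exactly what the abstract targets: it requires a dedicated EOG channel, whereas the proposed method works directly from the EEG without calibration.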

DOI: 10.1109/EMBC58623.2025.11254864. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2025, pp. 1-7, published 2025-07-01.
Citations: 0