
IEEE Journal of Biomedical and Health Informatics: Latest Publications

A LLM-Based Hybrid-Transformer Diagnosis System in Healthcare.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-16 DOI: 10.1109/JBHI.2024.3481412
Dongyuan Wu, Liming Nie, Rao Asad Mumtaz, Kadambri Agarwal

The application of computer vision-powered large language models (LLMs) for medical image diagnosis has significantly advanced healthcare systems. Recent progress in developing symmetrical architectures has greatly impacted various medical imaging tasks. While CNNs and RNNs have demonstrated excellent performance, these architectures often suffer substantial loss of detailed information: they struggle to capture global semantic information effectively and rely heavily on deep encoders and aggressive downsampling. This paper introduces a novel LLM-based Hybrid-Transformer Network (HybridTransNet) designed to encode tokenized Big Data patches with the transformer mechanism, elegantly embedding multimodal data of varying sizes as token-sequence inputs to LLMs. Subsequently, the network performs both inter-scale and intra-scale self-attention, processing data features through a transformer-based symmetric architecture with a refining module, which facilitates accurately recovering both local and global context information. Additionally, the output is refined using a novel fuzzy selector. Compared to other existing methods on two distinct datasets, the experimental findings and formal assessment demonstrate that our LLM-based HybridTransNet provides superior performance for brain tumor diagnosis in healthcare informatics.
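The abstract gives only a high-level description of the architecture. As an illustration of what inter-scale and intra-scale self-attention over tokenized image patches can look like, here is a minimal PyTorch sketch; the class name, patch sizes, and dimensions are hypothetical and not the authors' implementation.

```python
# Illustrative two-scale token attention: intra-scale attention within each patch scale,
# then inter-scale attention over the concatenated token sequence.
import torch
import torch.nn as nn

class TwoScaleAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        # Patch embeddings at two spatial scales (fine 8x8 and coarse 16x16 patches).
        self.embed_fine = nn.Conv2d(3, dim, kernel_size=8, stride=8)
        self.embed_coarse = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.intra_fine = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_coarse = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # Tokenize: (B, dim, H', W') -> (B, N, dim).
        fine = self.embed_fine(x).flatten(2).transpose(1, 2)
        coarse = self.embed_coarse(x).flatten(2).transpose(1, 2)
        # Intra-scale self-attention: tokens attend only within their own scale.
        fine, _ = self.intra_fine(fine, fine, fine)
        coarse, _ = self.intra_coarse(coarse, coarse, coarse)
        # Inter-scale self-attention: all tokens attend across both scales.
        tokens = torch.cat([fine, coarse], dim=1)
        fused, _ = self.inter(tokens, tokens, tokens)
        return fused

feats = TwoScaleAttention()(torch.randn(2, 3, 224, 224))  # -> (2, 784 + 196, 64)
```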

Citations: 0
SBTD: Secured Brain Tumor Detection in IoMT Enabled Smart Healthcare.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-16 DOI: 10.1109/JBHI.2024.3482465
Nishtha Tomar, Parkala Vishnu Bharadwaj Bayari, Gaurav Bhatnagar

Brain tumors are fatal and severely disrupt brain function as they advance. Timely detection and precise monitoring are crucial for improving patient outcomes and survival. A smart healthcare system leveraging the Internet of Medical Things (IoMT) revolutionizes patient care by offering streamlined remote healthcare, especially for individuals with acute medical conditions like brain tumors. However, such systems face significant challenges, such as (1) the increasing prevalence of cyber attacks in the expanding digital healthcare landscape, and (2) the lack of reliability and accuracy in existing tumor detection methods. To address these issues, we propose Secured Brain Tumor Detection (SBTD), the first unified system integrating IoMT with secure tumor detection. SBTD features: (1) a robust security framework, grounded in chaos theory, to safeguard medical data; and (2) a reliable machine learning-based tumor detection framework that accurately localizes tumors using their anatomy. Comprehensive experimental evaluations on different multimodal MRI datasets demonstrate the system's suitability, clinical applicability and superior performance over state-of-the-art algorithms.
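The abstract states only that the security framework is grounded in chaos theory. As a generic, hedged illustration of chaos-based image protection (not SBTD's actual scheme), the sketch below permutes pixels and XORs them with a keystream derived from a logistic map; the seed and control parameter are illustrative.

```python
# Illustrative logistic-map scrambling: a chaotic sequence drives a pixel permutation
# and an XOR keystream; the same seed and control parameter invert the process.
import numpy as np

def logistic_sequence(n, x0=0.61, r=3.99):
    """Chaotic sequence from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def chaotic_encrypt(image, x0=0.61, r=3.99):
    flat = image.astype(np.uint8).ravel()
    chaos = logistic_sequence(flat.size, x0, r)
    perm = np.argsort(chaos)                          # chaotic pixel permutation
    keystream = np.floor(chaos * 256).astype(np.uint8)
    cipher = flat[perm] ^ keystream                   # diffusion via XOR keystream
    return cipher.reshape(image.shape)

def chaotic_decrypt(cipher, x0=0.61, r=3.99):
    flat = cipher.ravel()
    chaos = logistic_sequence(flat.size, x0, r)
    perm = np.argsort(chaos)
    keystream = np.floor(chaos * 256).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ keystream                    # undo XOR, then undo the permutation
    return plain.reshape(cipher.shape)

scan = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for an MRI slice
assert np.array_equal(chaotic_decrypt(chaotic_encrypt(scan)), scan)
```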

Citations: 0
Prior Visual-guided Self-supervised Learning Enables Color Vignetting Correction for High-throughput Microscopic Imaging.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-16 DOI: 10.1109/JBHI.2024.3471907
Jianhang Wang, Tianyu Ma, Luhong Jin, Yunqi Zhu, Jiahui Yu, Feng Chen, Shujun Fu, Yingke Xu

Vignetting constitutes a prevalent optical degradation that significantly compromises the quality of biomedical microscopic imaging. However, a robust and efficient vignetting correction methodology for multi-channel microscopic images remains absent at present. In this paper, we take advantage of prior knowledge about the homogeneity of microscopic images and the radial attenuation property of vignetting to develop a self-supervised deep learning algorithm that achieves complex vignetting removal in color microscopic images. Our proposed method, the vignetting correction lookup table (VCLUT), is trainable on both single and multiple images and employs adversarial learning to effectively transfer good imaging conditions from the user-defined central region of its own light field to the entire image. To illustrate its effectiveness, we performed individual correction experiments on data from five distinct biological specimens. The results demonstrate that VCLUT exhibits enhanced performance compared to classical methods. We further examined its performance as a multi-image-based approach on a pathological dataset, revealing its advantage over other state-of-the-art approaches in both qualitative and quantitative measurements. Moreover, it uniquely possesses the capacity for generalization across various levels of vignetting intensity and an ultra-fast model computation capability, rendering it well-suited for integration into high-throughput imaging pipelines of digital microscopy.
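As a rough illustration of the lookup-table idea behind vignetting correction (not the adversarially trained VCLUT itself), the sketch below assumes the common model in which the observed intensity equals the true intensity multiplied by a radially decaying gain, estimates that radial profile, and inverts it.

```python
# Illustrative radial-gain lookup table: estimate mean brightness per radius bin,
# then apply the gain that flattens the profile relative to the image centre.
import numpy as np

def radial_gain_lut(image, n_bins=64):
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    r_bin = (r / r.max() * (n_bins - 1)).astype(int)
    gray = image.mean(axis=2) if image.ndim == 3 else image
    counts = np.bincount(r_bin.ravel(), minlength=n_bins)
    sums = np.bincount(r_bin.ravel(), weights=gray.ravel(), minlength=n_bins)
    profile = sums / np.maximum(counts, 1)        # mean brightness per radial bin
    lut = profile[0] / np.maximum(profile, 1e-6)  # gain relative to the centre bin
    return lut, r_bin

def correct_vignetting(image, lut, r_bin):
    gain = lut[r_bin]
    if image.ndim == 3:
        gain = gain[..., None]
    return np.clip(image * gain, 0, 255)

frame = np.random.rand(256, 256, 3) * 255         # stand-in for a microscopy frame
lut, r_bin = radial_gain_lut(frame)
flattened = correct_vignetting(frame, lut, r_bin)
```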

Citations: 0
mDARTS: Searching ML-Based ECG Classifiers against Membership Inference Attacks.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-16 DOI: 10.1109/JBHI.2024.3481505
Eunbin Park, Youngjoo Lee

This paper addresses the critical need for electrocardiogram (ECG) classifier architectures that balance high classification performance with robust privacy protection against membership inference attacks (MIA). We introduce a comprehensive approach that innovates in both machine learning efficacy and privacy preservation. Key contributions include the development of a privacy estimator to quantify and mitigate privacy leakage in neural network architectures used for ECG classification. Utilizing this privacy estimator, we propose mDARTS (searching ML-based ECG classifiers against MIA), integrating MIA's attack loss into the architecture search process to identify architectures that are both accurate and resilient to MIA threats. Our method achieves significant improvements, with an ECG classification accuracy of 92.1% and a lower privacy score of 54.3%, indicating reduced potential for sensitive information leakage. Heuristic experiments refine architecture search parameters specifically for ECG classification, enhancing classifier performance and privacy scores by up to 3.0% and 1.0%, respectively. The framework's adaptability supports user customization, enabling the extraction of architectures that meet specific criteria such as optimal classification performance with minimal privacy risk. By focusing on the intersection of high-performance ECG classification and the mitigation of privacy risks associated with MIA, our study offers a pioneering solution addressing the limitations of previous approaches.
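The abstract does not define the privacy estimator, so the sketch below only illustrates the general shape of such a search objective: classification loss plus a weighted membership-inference term. The MIA term here is a simple confidence-gap proxy between training and held-out samples, an assumption for illustration rather than the paper's estimator; the toy model and input size are likewise hypothetical.

```python
# Illustrative privacy-aware objective: cross-entropy plus a weighted proxy for
# membership-inference leakage (confidence gap between members and non-members).
import torch
import torch.nn.functional as F

def mia_proxy(member_logits, nonmember_logits):
    """Larger when training samples receive visibly higher confidence than held-out
    samples, i.e. when the model leaks membership information."""
    member_conf = F.softmax(member_logits, dim=1).max(dim=1).values.mean()
    nonmember_conf = F.softmax(nonmember_logits, dim=1).max(dim=1).values.mean()
    return member_conf - nonmember_conf

def search_objective(model, x_train, y_train, x_holdout, lam=0.5):
    logits_train = model(x_train)
    logits_holdout = model(x_holdout)
    cls_loss = F.cross_entropy(logits_train, y_train)
    attack_loss = mia_proxy(logits_train, logits_holdout)
    # Architectures minimizing both misclassification and leakage score best in the search.
    return cls_loss + lam * attack_loss

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(187, 5))  # toy ECG classifier
x_tr, y_tr, x_ho = torch.randn(32, 187), torch.randint(0, 5, (32,)), torch.randn(32, 187)
search_objective(model, x_tr, y_tr, x_ho).backward()
```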

Citations: 0
Attention-guided 3D CNN With Lesion Feature Selection for Early Alzheimer's Disease Prediction Using Longitudinal sMRI.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-16 DOI: 10.1109/JBHI.2024.3482001
Jinwei Liu, Yashu Xu, Yi Liu, Huating Luo, Wenxiang Huang, Lizhong Yao

Predicting the progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD) is critical for early intervention. Towards this end, various deep learning models have been applied in this domain, typically relying on structural magnetic resonance imaging (sMRI) data from a single time point while neglecting the dynamic changes in brain structure over time. Current longitudinal studies inadequately explore disease evolution dynamics and are burdened by high computational complexity. This paper introduces a novel lightweight 3D convolutional neural network specifically designed to capture the evolution of brain diseases for modeling the progression of MCI. First, a longitudinal lesion feature selection strategy is proposed to extract core features from temporal data, facilitating the detection of subtle differences in brain structure between two time points. Next, to refine the model for a more concentrated emphasis on lesion features, a disease trend attention mechanism is introduced to learn the dependencies between overall disease trends and local variation features. Finally, disease prediction visualization techniques are employed to improve the interpretability of the final predictions. Extensive experiments demonstrate that the proposed model achieves state-of-the-art performance in terms of area under the curve (AUC), accuracy, specificity, precision, and F1 score. This study confirms the efficacy of our early diagnostic method, utilizing only two follow-up sMRI scans to predict the disease status of MCI patients 24 months later with an AUC of 79.03%.
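A minimal sketch of the longitudinal idea follows: a shared 3D CNN encodes the two time points, their feature difference stands in for lesion change, and an attention map derived from that difference re-weights the later scan's features. The layer sizes, the two-class head, and the 64-voxel input are illustrative assumptions, not the paper's lightweight network.

```python
# Illustrative longitudinal attention: encode two sMRI scans with a shared 3D CNN,
# derive an attention gate from their feature difference, and classify progression.
import torch
import torch.nn as nn

class LongitudinalAttention3D(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Sequential(nn.Conv3d(ch, 1, kernel_size=1), nn.Sigmoid())
        self.head = nn.Linear(ch, 2)               # stable MCI vs. progressive MCI

    def forward(self, scan_t0, scan_t1):
        f0, f1 = self.encoder(scan_t0), self.encoder(scan_t1)
        change = f1 - f0                           # longitudinal (lesion-change) feature
        gate = self.attn(change)                   # attention guided by the change map
        pooled = (f1 * gate).mean(dim=(2, 3, 4))   # gated global average pooling
        return self.head(pooled)

model = LongitudinalAttention3D()
logits = model(torch.randn(1, 1, 64, 64, 64), torch.randn(1, 1, 64, 64, 64))  # -> (1, 2)
```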

Citations: 0
Interpretable Multi-Branch Architecture for Spatiotemporal Neural Networks and Its Application in Seizure Prediction.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-15 DOI: 10.1109/JBHI.2024.3481005
Baolian Shan, Haiqing Yu, Yongzhi Huang, Minpeng Xu, Dong Ming

Currently, spatiotemporal convolutional neural networks (CNNs) for electroencephalogram (EEG) signals have emerged as promising tools for seizure prediction (SP), which explore the spatiotemporal biomarkers in an epileptic brain. Generally, these CNNs capture spatiotemporal features at a single spectral resolution. However, epileptiform EEG signals contain irregular neural oscillations of different frequencies in different brain regions. Therefore, CNNs that do not sufficiently capture these complex spectral properties may underperform and be difficult to interpret. This study proposed a novel interpretable multi-branch architecture for spatiotemporal CNNs, namely MultiSincNet. On the one hand, the MultiSincNet could directly show the frequency boundaries using the interpretable sinc-convolution layers. On the other hand, it could extract and integrate multiple spatiotemporal features across varying spectral resolutions using parallel branches. Moreover, we also constructed a post-hoc explanation technique for multi-branch CNNs, using the first-order Taylor expansion and chain rule based on the multivariate composite function, which demonstrates the crucial spatiotemporal features learned by the proposed multi-branch spatiotemporal CNN. When combined with the optimal MultiSincNet, ShallowConvNet, DeepConvNet, and EEGWaveNet significantly improved their subject-specific performance on most metrics. Specifically, the optimal MultiSincNet significantly increased the average accuracy, sensitivity, specificity, binary F1-score, weighted F1-score, and AUC of EEGWaveNet by about 7%, 8%, 7%, 8%, 7%, and 7%, respectively. Besides, the visualization results showed that the optimal model mainly extracts the spectral energy difference in the high gamma band, localized to specific spatial areas, as the dominant spatiotemporal EEG feature.
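The interpretability claim rests on sinc-convolution layers, i.e. band-pass filters parameterized only by their two cutoff frequencies. The sketch below builds such a kernel and applies it to a synthetic EEG channel; the sampling rate, cutoffs, and kernel length are illustrative, and the learnable-cutoff machinery of MultiSincNet is omitted.

```python
# Illustrative sinc band-pass kernel: the filter is fully determined by its cutoff
# frequencies, which is what makes the learned frequency boundaries directly readable.
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, fs=250.0, length=65):
    """FIR band-pass kernel between f_low and f_high (Hz) for sample rate fs."""
    t = np.arange(length) - (length - 1) / 2.0
    low, high = f_low / fs, f_high / fs                 # normalized cutoffs (cycles/sample)
    # Difference of two ideal low-pass sinc filters gives a band-pass response.
    kernel = 2 * high * np.sinc(2 * high * t) - 2 * low * np.sinc(2 * low * t)
    kernel *= np.hamming(length)                        # window to reduce spectral ripple
    return kernel / np.abs(kernel).sum()

# One "branch": isolate the high-gamma band of a synthetic EEG channel sampled at 250 Hz.
eeg = np.random.randn(1000)
high_gamma = np.convolve(eeg, sinc_bandpass_kernel(60.0, 100.0), mode="same")
```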

Citations: 0
Feature Separation in Diffuse Lung Disease Image Classification by Using Evolutionary Algorithm-Based NAS.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-15 DOI: 10.1109/JBHI.2024.3481012
Qing Zhang, Dan Shao, Lin Lin, Guoliang Gong, Rui Xu, Shoji Kido, HongWei Cui

In the field of diagnosing lung diseases, the application of neural networks (NNs) in image classification exhibits significant potential. However, NNs are considered "black boxes," making it difficult to discern their decision-making processes, thereby leading to skepticism and concern regarding NNs. This compromises model reliability and hampers intelligent medicine's development. To tackle this issue, we introduce the Evolutionary Neural Architecture Search (EvoNAS). In image classification tasks, EvoNAS initially utilizes an Evolutionary Algorithm to explore various Convolutional Neural Networks, ultimately yielding an optimized network that excels at separating redundant texture features from the most discriminative ones. Retaining the most discriminative features improves classification accuracy, particularly in distinguishing similar features. This approach illuminates the intrinsic mechanics of classification, thereby enhancing the accuracy of the results. Subsequently, we incorporate a Differential Evolution algorithm based on distribution estimation, significantly enhancing search efficiency. Employing visualization techniques, we demonstrate the effectiveness of EvoNAS, endowing the model with interpretability. Finally, we conduct experiments on the diffuse lung disease texture dataset using EvoNAS. Compared to the original network, the classification accuracy increases by 0.56%. Moreover, our EvoNAS approach demonstrates significant advantages over existing methods on the same dataset.
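As a minimal sketch of the differential-evolution component, the loop below shows the mutation, crossover, and greedy-selection steps over a continuous architecture encoding; the fitness function is a hypothetical placeholder where EvoNAS would instead train and validate the CNN decoded from each vector.

```python
# Illustrative differential evolution over a normalized architecture-encoding vector.
import numpy as np

rng = np.random.default_rng(0)

def fitness(vec):
    # Placeholder: pretend validation error is lowest near this arbitrary point.
    return float(np.sum((vec - np.array([0.3, 0.7, 0.5])) ** 2))

def differential_evolution(dim=3, pop_size=20, gens=50, F=0.5, CR=0.9):
    pop = rng.random((pop_size, dim))
    scores = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)   # mutation
            cross = rng.random(dim) < CR                   # crossover mask
            trial = np.where(cross, mutant, pop[i])
            trial_score = fitness(trial)
            if trial_score < scores[i]:                    # greedy selection
                pop[i], scores[i] = trial, trial_score
    return pop[scores.argmin()], scores.min()

best_encoding, best_error = differential_evolution()
```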

Citations: 0
Agnostic-Specific Modality Learning for Cancer Survival Prediction from Multiple Data.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-15 DOI: 10.1109/JBHI.2024.3481310
Honglei Liu, Yi Shi, Ying Xu, Ao Li, Minghui Wang

Cancer is a pressing public health problem and one of the main causes of mortality worldwide. The development of advanced computational methods for predicting cancer survival is pivotal in aiding clinicians to formulate effective treatment strategies and improve patient quality of life. Recent advances in survival prediction methods show that integrating diverse information from various cancer-related data, such as pathological images and genomics, is crucial for improving prediction accuracy. Despite the promising results of existing approaches, multimodal cancer data present great challenges of modality gap and semantic redundancy, which could hinder comprehensive integration and pose substantial obstacles to further enhancing cancer survival prediction. In this study, we propose a novel agnostic-specific modality learning (ASML) framework for accurate cancer survival prediction. To bridge the modality gap and provide a comprehensive view of distinct data modalities, we employ an agnostic-specific learning strategy to learn the commonality across modalities and the uniqueness of each modality. Moreover, a cross-modal fusion network is used to integrate multimodal information by modeling modality correlations and to diminish semantic redundancy in a divide-and-conquer manner. Extensive experiment results on three TCGA datasets demonstrate that ASML achieves better performance than other existing cancer survival prediction methods for multiple data.
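A minimal sketch of the agnostic-specific split is shown below: a shared encoder captures modality-agnostic structure, each modality also keeps a specific encoder, and a fusion layer maps both views to a survival risk score. The pathology/genomics feature dimensions and the simple concatenation fusion are illustrative assumptions, not ASML's cross-modal fusion network.

```python
# Illustrative agnostic-specific encoders with a concatenation fusion head.
import torch
import torch.nn as nn

class AgnosticSpecific(nn.Module):
    def __init__(self, dim_path=512, dim_gene=200, hidden=64):
        super().__init__()
        # Project both modalities into a common space before the shared (agnostic) encoder.
        self.proj_path = nn.Linear(dim_path, hidden)
        self.proj_gene = nn.Linear(dim_gene, hidden)
        self.agnostic = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.specific_path = nn.Sequential(nn.Linear(dim_path, hidden), nn.ReLU())
        self.specific_gene = nn.Sequential(nn.Linear(dim_gene, hidden), nn.ReLU())
        self.fusion = nn.Linear(4 * hidden, 1)    # fused representation -> risk score

    def forward(self, x_path, x_gene):
        shared = [self.agnostic(self.proj_path(x_path)),
                  self.agnostic(self.proj_gene(x_gene))]
        specific = [self.specific_path(x_path), self.specific_gene(x_gene)]
        return self.fusion(torch.cat(shared + specific, dim=1))

risk = AgnosticSpecific()(torch.randn(4, 512), torch.randn(4, 200))  # -> (4, 1)
```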

Citations: 0
Joint Energy-based Model for Semi-supervised Respiratory Sound Classification: A Method of Insensitive to Distribution Mismatch.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-15 DOI: 10.1109/JBHI.2024.3480999
Wenjie Song, Jiqing Han, Shiwen Deng, Tieran Zheng, Guibin Zheng, Yongjun He

Semi-supervised learning effectively mitigates the lack of labeled data by introducing extensive unlabeled data. Despite achieving success in respiratory sound classification, in practice it usually takes years to acquire a sufficiently sizeable unlabeled set, which consequently extends the research timeline. Considering that respiratory sounds are also available from other related tasks, such as breath phase detection and COVID-19 detection, treating these external samples as unlabeled data for respiratory sound classification might be an alternative. However, since these external samples are collected in different scenarios via different devices, there inevitably exists a distribution mismatch between the labeled and external unlabeled data. Existing methods usually assume that the labeled and unlabeled data follow the same data distribution; therefore, they cannot benefit from external samples. To utilize external unlabeled data, we propose a semi-supervised method based on the Joint Energy-based Model (JEM) in this paper. During training, the method attempts to use only the essential semantic components within the samples to model the data distribution. When non-semantic components such as recording environments and devices vary, they have only a small impact on model training, so a relatively accurate distribution estimation is obtained. Therefore, the method is insensitive to the distribution mismatch, enabling the model to leverage external unlabeled data to mitigate the lack of labeled data. Taking ICBHI 2017 as the labeled set and HF_Lung_V1 and COVID-19 Sounds as the external unlabeled sets, the proposed method exceeds the baseline by 12.86.
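A minimal sketch of the joint energy-based formulation follows, assuming the standard JEM construction in which a classifier's logits define an energy E(x) = -logsumexp_y f(x)[y]: labeled data contribute ordinary cross-entropy, while unlabeled (possibly external) data enter through a contrastive energy term against short-run Langevin samples. The step sizes, weighting, and toy classifier are illustrative, not the paper's training recipe.

```python
# Illustrative JEM-style semi-supervised loss: cross-entropy on labeled data plus an
# energy term that contrasts real unlabeled samples with short-run Langevin negatives.
import torch
import torch.nn.functional as F

def energy(model, x):
    return -torch.logsumexp(model(x), dim=1)      # E(x) = -logsumexp_y f(x)[y]

def langevin_samples(model, x_init, steps=10, step_size=1.0, noise=0.01):
    x = x_init.clone().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(model, x).sum(), x)[0]
        x = (x - step_size * grad + noise * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

def jem_loss(model, x_lab, y_lab, x_unlab, lam=0.1):
    ce = F.cross_entropy(model(x_lab), y_lab)
    x_neg = langevin_samples(model, torch.randn_like(x_unlab))
    # Minimizing E(real) - E(negatives) pushes probability mass toward the real data.
    gen = energy(model, x_unlab).mean() - energy(model, x_neg).mean()
    return ce + lam * gen

model = torch.nn.Linear(64, 4)                    # toy respiratory-sound feature classifier
x_l, y_l, x_u = torch.randn(8, 64), torch.randint(0, 4, (8,)), torch.randn(16, 64)
jem_loss(model, x_l, y_l, x_u).backward()
```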

Citations: 0
Aceso-DSAL: Discovering Clinical Evidences from Medical Literature Based on Distant Supervision and Active Learning.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-15 DOI: 10.1109/JBHI.2024.3480998
Xiang Zhang, Jiaxin Hu, Qian Lu, Lu Niu, Xinqi Wang

Automatic extraction of valuable, structured evidence from the exponentially growing clinical trial literature can help physicians practice evidence-based medicine quickly and accurately. However, current research on evidence extraction has been limited by the lack of generalization ability across various clinical topics and the high cost of manual annotation. In this work, we address these challenges by constructing a PICO-based evidence dataset, PICO-DS, covering five clinical topics. This dataset was automatically labeled via distant supervision based on our proposed textual similarity algorithm, ROUGE-Hybrid. We then present the Aceso-DSAL model, an extension of our previous supervised evidence extraction model, Aceso. In Aceso-DSAL, the distantly labeled, multi-topic PICO-DS is exploited as the training corpus, which greatly enhances the generalization of the extraction model. To mitigate the influence of noise unavoidably introduced by distant supervision, we employ TextCNN and MW-Net models and an active learning paradigm to weigh the value of each sample. We evaluate the effectiveness of our model on the PICO-DS dataset and find that it outperforms state-of-the-art studies in identifying evidential sentences.
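ROUGE-Hybrid is not specified in the abstract; as an assumption-laden sketch of what a hybrid ROUGE similarity for distant labeling can look like, the code below blends unigram overlap (ROUGE-1) with longest-common-subsequence overlap (ROUGE-L). A sentence whose score against a known evidence snippet exceeds a threshold would then be distantly labeled as evidence; the blend weight is an assumption.

```python
# Illustrative hybrid of ROUGE-1 (unigram overlap) and ROUGE-L (LCS overlap) F1 scores.
from collections import Counter

def rouge1_f1(candidate, reference):
    c, r = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def lcs_length(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rougeL_f1(candidate, reference):
    a, b = candidate.lower().split(), reference.lower().split()
    lcs = lcs_length(a, b)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(a), lcs / len(b)
    return 2 * prec * rec / (prec + rec)

def rouge_hybrid(candidate, reference, alpha=0.5):
    return alpha * rouge1_f1(candidate, reference) + (1 - alpha) * rougeL_f1(candidate, reference)

score = rouge_hybrid("the drug reduced mortality in adults",
                     "mortality was reduced by the drug in adult patients")
```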

Citations: 0