
Latest Articles from IEEE Journal of Biomedical and Health Informatics

Interpretable Dynamic Directed Graph Convolutional Network for Multi-Relational Prediction of Missense Mutation and Drug Response.
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-18 | DOI: 10.1109/JBHI.2024.3483316
Qian Gao, Tao Xu, Xiaodi Li, Wanling Gao, Haoyuan Shi, Youhua Zhang, Jie Chen, Zhenyu Yue

Tumor heterogeneity presents a significant challenge in predicting drug responses, especially as missense mutations within the same gene can lead to varied outcomes such as drug resistance, enhanced sensitivity, or therapeutic ineffectiveness. These complex relationships highlight the need for advanced analytical approaches in oncology. Due to their powerful ability to handle heterogeneous data, graph convolutional networks (GCNs) represent a promising approach for predicting drug responses. However, simple bipartite graphs cannot accurately capture the complex relationships involved in missense mutation and drug response. Furthermore, deep learning models for drug response are often considered "black boxes", and their interpretability remains a widely discussed issue. To address these challenges, we propose an Interpretable Dynamic Directed Graph Convolutional Network (IDDGCN) framework, which incorporates four key features: (1) the use of directed graphs to differentiate between sensitivity and resistance relationships, (2) the dynamic updating of node weights based on node-specific interactions, (3) the exploration of associations between different mutations within the same gene and drug response, and (4) the enhancement of model interpretability through the integration of a weighting mechanism that accounts for biological significance, alongside a ground truth construction method to evaluate prediction transparency. The experimental results demonstrate that IDDGCN outperforms existing state-of-the-art models, exhibiting excellent predictive power. Both qualitative and quantitative evaluations of its interpretability further highlight its ability to explain predictions, offering a fresh perspective for precision oncology and targeted drug development.
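
The abstract above does not come with code; the snippet below is only a minimal, hypothetical sketch of feature (1), keeping "sensitivity" and "resistance" as separate directed relations during message passing, in the spirit of a relational GCN. It is not the authors' IDDGCN implementation, and every tensor name, dimension, and toy edge is an assumption.

```python
import torch
import torch.nn as nn

class RelationDirectedGCNLayer(nn.Module):
    """One message-passing layer with separate weights per directed relation.

    Illustrative only: 'sensitivity' and 'resistance' edges between mutation
    and drug nodes are aggregated through relation-specific linear maps, so
    the two response types are never mixed into a single edge set.
    """

    def __init__(self, in_dim, out_dim, num_relations=2):
        super().__init__()
        self.rel_weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)]
        )
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_per_relation):
        # x: (num_nodes, in_dim); adj_per_relation: one row-normalized
        # (num_nodes, num_nodes) adjacency matrix per directed relation type.
        out = self.self_loop(x)
        for adj, lin in zip(adj_per_relation, self.rel_weights):
            out = out + adj @ lin(x)   # aggregate neighbors of this relation only
        return torch.relu(out)

# Toy graph: 3 mutation nodes + 2 drug nodes, 2 relations (sensitivity, resistance).
num_nodes, feat_dim = 5, 8
x = torch.randn(num_nodes, feat_dim)
adj_sens = torch.zeros(num_nodes, num_nodes)
adj_res = torch.zeros(num_nodes, num_nodes)
adj_sens[0, 3] = 1.0   # mutation 0 -> drug 3 (sensitive); direction matters
adj_res[1, 4] = 1.0    # mutation 1 -> drug 4 (resistant)

layer = RelationDirectedGCNLayer(feat_dim, 16)
h = layer(x, [adj_sens, adj_res])
print(h.shape)  # torch.Size([5, 16])
```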

Citations: 0
rU-Net, Multi-Scale Feature Fusion and Transfer Learning: Unlocking the Potential of Cuffless Blood Pressure Monitoring With PPG and ECG
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-18 | DOI: 10.1109/JBHI.2024.3483301
Jiaming Chen;Xueling Zhou;Lei Feng;Bingo Wing-Kuen Ling;Lianyi Han;Hongtao Zhang
This study introduces an innovative deep-learning model for cuffless blood pressure estimation using PPG and ECG signals, demonstrating state-of-the-art performance on the largest clean dataset, PulseDB. The rU-Net architecture, a fusion of U-Net and ResNet, enhances both generalization and feature extraction accuracy. Accurate multi-scale feature capture is facilitated by short-time Fourier transform (STFT) time-frequency distributions and multi-head attention mechanisms, allowing data-driven feature selection. The inclusion of demographic parameters as supervisory information further elevates performance. On the calibration-based dataset, our model excels, achieving outstanding accuracy (SBP MAE ± std: 4.49 ± 4.86 mmHg, DBP MAE ± std: 2.69 ± 3.10 mmHg), surpassing AAMI standards and earning a BHS Grade A rating. Addressing the challenge of calibration-free data, we propose a fine-tuning-based transfer learning approach. Remarkably, with only 10% data transfer, our model attains exceptional accuracy (SBP MAE ± std: 4.14 ± 5.01 mmHg, DBP MAE ± std: 2.48 ± 2.93 mmHg). This study sets the stage for the development of highly accurate and reliable wearable cuffless blood pressure monitoring devices.
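
No code accompanies the abstract; as a rough illustration of the STFT time-frequency front end it mentions, the fragment below converts a synthetic PPG segment into a magnitude spectrogram of the kind a U-Net-style encoder could take as input. The sampling rate, window length, overlap, and the synthetic signal are assumed values, not those used in the paper.

```python
import numpy as np
from scipy.signal import stft

fs = 125                        # assumed sampling rate (Hz) of the PPG/ECG segment
t = np.arange(0, 10, 1 / fs)    # 10-second window
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # synthetic ~72 bpm pulse

# Short-time Fourier transform: the magnitude spectrogram is a 2-D time-frequency
# "image" that a convolutional encoder can consume directly.
f, tau, Zxx = stft(ppg, fs=fs, nperseg=256, noverlap=192)
tf_image = np.abs(Zxx)

print(tf_image.shape)  # (frequency bins, time frames), e.g. (129, N)
```
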
Citations: 0
Camera-Based Respiratory Imaging System for Monitoring Infant Thoracoabdominal Patterns of Respiration.
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-17 | DOI: 10.1109/JBHI.2024.3482569
Dongmin Huang, Yongshen Zeng, Yingen Zhu, Xiaoyan Song, Liping Pan, Jie Yang, Yanrong Wang, Hongzhou Lu, Wenjin Wang

Existing respiratory monitoring techniques primarily focus on respiratory rate measurement, neglecting the potential of using thoracoabdominal patterns of respiration for infant lung health assessment. To bridge this gap, we exploit the unique advantage of spatial redundancy of a camera sensor to analyze the infant thoracoabdominal respiratory motion. Specifically, we propose a camera-based respiratory imaging (CRI) system that utilizes optical flow to construct a spatio-temporal respiratory imager for comparing the infant chest and abdominal respiratory motion, and employs deep learning algorithms to identify infant abdominal, thoracoabdominal synchronous, and thoracoabdominal asynchronous patterns of respiration. To alleviate the challenges posed by limited clinical training data and subject variability, we introduce a novel multiple-expert contrastive learning (MECL) strategy to CRI. It enriches training samples by reversing and pairing different-class data, and promotes the representation consistency of same-class data through multi-expert collaborative optimization. Clinical validation involving 44 infants shows that MECL achieves 70% in sensitivity and 80.21% in specificity, which validates the feasibility of CRI for respiratory pattern recognition. This work investigates a novel video-based approach for assessing the infant thoracoabdominal patterns of respiration, revealing a new value stream of video health monitoring in neonatal care.
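
The paper's spatio-temporal respiratory imager is more elaborate; the sketch below only shows the generic ingredient of dense optical flow computed over separate chest and abdominal regions of interest, using OpenCV's Farneback implementation. The ROI coordinates, flow parameters, and synthetic frames are placeholders rather than the authors' setup.

```python
import numpy as np
import cv2

def roi_vertical_motion(prev_gray, next_gray, roi):
    """Mean vertical optical-flow component inside a rectangular ROI (x, y, w, h)."""
    x, y, w, h = roi
    # Farneback dense flow: (pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(flow[y:y + h, x:x + w, 1].mean())

# Two synthetic grayscale frames standing in for consecutive video frames.
prev_frame = (np.random.rand(240, 320) * 255).astype(np.uint8)
next_frame = np.roll(prev_frame, 1, axis=0)        # crude 1-pixel downward shift

chest_roi = (80, 60, 120, 60)                      # placeholder coordinates
abdomen_roi = (80, 140, 120, 60)

chest = roi_vertical_motion(prev_frame, next_frame, chest_roi)
abdomen = roi_vertical_motion(prev_frame, next_frame, abdomen_roi)

# Tracking these two signals over time is what lets one label breathing as
# abdominal, thoracoabdominal synchronous, or thoracoabdominal asynchronous.
print(chest, abdomen)
```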

Citations: 0
A Cross Attention Approach to Diagnostic Explainability Using Clinical Practice Guidelines for Depression.
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-17 | DOI: 10.1109/JBHI.2024.3483577
Sumit Dalal, Deepa Tilwani, Manas Gaur, Sarika Jain, Valerie L Shalin, Amit P Sheth

The lack of explainability in using relevant clinical knowledge hinders the adoption of artificial intelligence-powered analysis of unstructured clinical dialogue. A wealth of relevant, untapped Mental Health (MH) data is available in online communities, providing the opportunity to address the explainability problem with substantial potential impact as a screening tool for both online and offline applications. Inspired by how clinicians rely on their expertise when interacting with patients, we leverage relevant clinical knowledge to classify and explain depression-related data, reducing manual review time and engendering trust. We developed a method to enhance attention in contemporary transformer models and generate explanations for classifications that are understandable by mental health practitioners (MHPs) by incorporating external clinical knowledge. We propose a domain-general architecture called ProcesS knowledge-infused cross ATtention (PSAT) that incorporates clinical practice guidelines (CPG) when computing attention. We transform a CPG resource focused on depression, such as the Patient Health Questionnaire (e.g. PHQ-9) and related questions, into a machine-readable ontology using SNOMED-CT. With this resource, PSAT enhances the ability of models like GPT-3.5 to generate application-relevant explanations. Evaluation of four expert-curated datasets related to depression demonstrates PSAT's application-relevant explanations. PSAT surpasses the performance of twelve baseline models and can provide explanations where other baselines fall short.
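
To make the knowledge-infused cross-attention idea concrete, here is a generic, assumed sketch (not PSAT itself): text token embeddings act as queries while a handful of guideline-concept embeddings, e.g. vectors for PHQ-9 items, act as keys and values, so the resulting attention map can be read as "which clinical concept each token leans on". All dimensions and both embedding matrices are invented for the demo.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_model = 64
text_tokens = torch.randn(12, d_model)       # 12 token embeddings from a post (assumed)
cpg_concepts = torch.randn(9, d_model)       # 9 guideline-concept embeddings, e.g. PHQ-9 items

W_q = torch.nn.Linear(d_model, d_model, bias=False)
W_k = torch.nn.Linear(d_model, d_model, bias=False)
W_v = torch.nn.Linear(d_model, d_model, bias=False)

q = W_q(text_tokens)                         # queries come from the text
k, v = W_k(cpg_concepts), W_v(cpg_concepts)  # keys/values come from clinical knowledge

attn = F.softmax(q @ k.T / d_model ** 0.5, dim=-1)   # (12 tokens x 9 concepts)
knowledge_aware = attn @ v                   # knowledge-infused token representations

# attn[i, j] can be read as "how much token i relies on guideline concept j",
# the kind of signal an MHP-facing explanation could surface.
print(attn.shape, knowledge_aware.shape)
```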

Citations: 0
CATransformer: A Cycle-Aware Transformer for High-Fidelity ECG Generation From PPG.
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-17 | DOI: 10.1109/JBHI.2024.3482853
Xiaoyan Yuan, Wei Wang, Xiaohe Li, Yuanting Zhang, Xiping Hu, M Jamal Deen

Electrocardiography (ECG) is the gold standard for monitoring heart function and is crucial for preventing the worsening of cardiovascular diseases (CVDs). However, the inconvenience of ECG acquisition poses challenges for long-term continuous monitoring. Consequently, researchers have explored non-invasive and easily accessible photoplethysmography (PPG) as an alternative, converting it into ECG. Previous studies have focused on peaks or simple mapping to generate ECG, ignoring the inherent periodicity of cardiovascular signals. This results in an inability to accurately extract physiological information during the cycle, thus compromising the generated ECG signals' clinical utility. To this end, we introduce a novel PPG-to-ECG translation model called CATransformer, capable of adaptive modeling based on the cardiac cycle. Specifically, CATransformer automatically extracts the cycle using a cycle-aware module and creates multiple semantic views of the cardiac cycle. It leverages a transformer to capture detailed features within each cycle and the dynamics across cycles. Our method outperforms existing approaches, exhibiting the lowest RMSE across five paired PPG-ECG databases. Additionally, extensive experiments are conducted on four cardiovascular-related tasks to assess the clinical utility of the generated ECG, achieving consistent state-of-the-art performance. Experimental results confirm that CATransformer generates highly faithful ECG signals while preserving their physiological characteristics.
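
The cycle-aware module in CATransformer is learned; purely as an illustration of the underlying notion of segmenting a PPG waveform into cardiac cycles before tokenization, the snippet below uses simple peak detection on a synthetic pulse wave. The sampling rate, refractory distance, and signal are assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 125                                   # assumed PPG sampling rate (Hz)
t = np.arange(0, 8, 1 / fs)
ppg = np.sin(2 * np.pi * 1.1 * t) + 0.05 * np.random.randn(t.size)  # synthetic pulse wave

# Detect systolic peaks; enforce a refractory distance of ~0.5 s between beats.
peaks, _ = find_peaks(ppg, distance=int(0.5 * fs))

# Slice the signal into per-cycle segments (peak to peak): these are the units a
# cycle-aware tokenizer would embed and feed to the transformer.
cycles = [ppg[a:b] for a, b in zip(peaks[:-1], peaks[1:])]
print(len(cycles), [len(c) for c in cycles[:3]])
```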

Citations: 0
A LLM-Based Hybrid-Transformer Diagnosis System in Healthcare.
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-16 | DOI: 10.1109/JBHI.2024.3481412
Dongyuan Wu, Liming Nie, Rao Asad Mumtaz, Kadambri Agarwal

The application of computer vision-powered large language models (LLMs) for medical image diagnosis has significantly advanced healthcare systems. Recent progress in developing symmetrical architectures has greatly impacted various medical imaging tasks. While CNNs and RNNs have demonstrated excellent performance, these architectures often face notable limitations, such as substantial losses of detailed information, difficulty in capturing global semantic information effectively, and a heavy reliance on deep encoders and aggressive downsampling. This paper introduces a novel LLM-based Hybrid-Transformer Network (HybridTransNet) designed to encode tokenized Big Data patches with the transformer mechanism, which elegantly embeds multimodal data of varying sizes as token sequence inputs to LLMs. Subsequently, the network performs both inter-scale and intra-scale self-attention, processing data features through a transformer-based symmetric architecture with a refining module, which facilitates accurately recovering both local and global context information. Additionally, the output is refined using a novel fuzzy selector. Compared to other existing methods on two distinct datasets, the experimental findings and formal assessment demonstrate that our LLM-based HybridTransNet provides superior performance for brain tumor diagnosis in healthcare informatics.
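
As a loose, assumed illustration of the tokenization step described above, turning fixed-size image patches into a token sequence that a transformer-style encoder can consume, the helper below reshapes a placeholder MRI slice into flattened patch tokens. The patch size and image size are arbitrary, and this is not the HybridTransNet pipeline.

```python
import numpy as np

def image_to_patch_tokens(img, patch=16):
    """Split an (H, W, C) image into a sequence of flattened patch tokens."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "illustration assumes divisible sizes"
    tokens = (img.reshape(h // patch, patch, w // patch, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, patch * patch * c))
    return tokens

mri_slice = np.random.rand(224, 224, 1)           # placeholder for an MRI slice
tokens = image_to_patch_tokens(mri_slice, patch=16)
print(tokens.shape)                               # (196, 256): 14x14 patches as tokens
```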

Citations: 0
Facial Expression Recognition for Healthcare Monitoring Systems Using Neural Random Forest
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-16 | DOI: 10.1109/JBHI.2024.3482450
Muhammad Hameed Siddiqi;Irshad Ahmad;Yousef Alhwaiti;Faheem Khan
Facial expressions vary with different health conditions, making a facial expression recognition (FER) system valuable within a healthcare framework. Achieving accurate recognition of facial expressions is a considerable challenge due to the difficulty in capturing subtle features. This research introduces an ensemble neural random forest method that utilizes a convolutional neural network (CNN) architecture for feature extraction and an optimized random forest for classification. For feature extraction, four convolutional layers with different numbers of filters and kernel sizes are used. Further, max-pooling, batch normalization, and dropout layers are used in the model to expedite feature extraction and avoid overfitting. The extracted features are provided to the optimized random forest for classification, whose configuration (number of trees, splitting criterion, maximum tree depth, maximum terminal nodes, minimum samples per split, and maximum features per tree) is tuned for the classification task. To demonstrate the significance of the proposed model, we conducted a thorough assessment of the proposed neural random forest through an extensive experiment encompassing six publicly available datasets. The remarkable weighted average recognition rate of 97.3% achieved across these diverse datasets highlights the effectiveness of our approach in the context of FER systems.
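
The paper specifies its own layer settings and tuned forest; the pipeline below is only a schematic stand-in that mirrors the described division of labor: a small four-block convolutional extractor (filter counts, kernel sizes, and dropout rate are assumptions) feeding an sklearn RandomForestClassifier whose listed hyperparameters are exposed for tuning. The toy data, labels, and hyperparameter values are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class ConvFeatureExtractor(nn.Module):
    """Four conv blocks with pooling, batch norm and dropout, echoing the described design."""
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in (16, 32, 64, 128):          # filter counts are assumptions
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.BatchNorm2d(out_ch), nn.ReLU(),
                       nn.MaxPool2d(2), nn.Dropout(0.25)]
            in_ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x).flatten(1)            # (batch, feature_dim)

extractor = ConvFeatureExtractor().eval()
faces = torch.randn(32, 1, 48, 48)                # placeholder 48x48 grayscale faces
labels = np.random.randint(0, 7, size=32)         # 7 expression classes (toy labels)

with torch.no_grad():
    feats = extractor(faces).numpy()

# Random forest over the CNN features; these are the hyperparameters the abstract
# says are optimized (values here are illustrative, not the tuned ones).
rf = RandomForestClassifier(n_estimators=200, criterion="gini", max_depth=20,
                            max_leaf_nodes=None, min_samples_split=2,
                            max_features="sqrt", random_state=0)
rf.fit(feats, labels)
print(rf.score(feats, labels))
```
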
Citations: 0
SBTD: Secured Brain Tumor Detection in IoMT Enabled Smart Healthcare.
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-16 | DOI: 10.1109/JBHI.2024.3482465
Nishtha Tomar, Parkala Vishnu Bharadwaj Bayari, Gaurav Bhatnagar

Brain tumors are fatal and severely disrupt brain function as they advance. Timely detection and precise monitoring are crucial for improving patient outcomes and survival. A smart healthcare system leveraging the Internet of Medical Things (IoMT) revolutionizes patient care by offering streamlined remote healthcare, especially for individuals with acute medical conditions like brain tumors. However, such systems face significant challenges, such as (1) the increasing prevalence of cyber attacks in the expanding digital healthcare landscape, and (2) the lack of reliability and accuracy in existing tumor detection methods. To address these issues, we propose Secured Brain Tumor Detection (SBTD), the first unified system integrating IoMT with secure tumor detection. SBTD features: (1) a robust security framework, grounded in chaos theory, to safeguard medical data; and (2) a reliable machine learning-based tumor detection framework that accurately localizes tumors using their anatomy. Comprehensive experimental evaluations on different multimodal MRI datasets demonstrate the system's suitability, clinical applicability and superior performance over state-of-the-art algorithms.
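
The abstract only characterizes SBTD's security framework as grounded in chaos theory, so the fragment below is a generic textbook example from that family, a logistic-map keystream XORed with image bytes, and should not be taken as the authors' scheme. The map parameters and key seed are arbitrary.

```python
import numpy as np

def logistic_keystream(length, x0=0.61, r=3.99):
    """Generate a byte keystream from the chaotic logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_image(img_bytes, x0=0.61, r=3.99):
    """Encrypt/decrypt (XOR is its own inverse) a flattened uint8 image."""
    ks = logistic_keystream(img_bytes.size, x0, r)
    return img_bytes ^ ks

mri = (np.random.rand(64, 64) * 255).astype(np.uint8)     # placeholder MRI tile
cipher = xor_image(mri.ravel()).reshape(mri.shape)
restored = xor_image(cipher.ravel()).reshape(mri.shape)
assert np.array_equal(mri, restored)   # same key seed recovers the original image
```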

Citations: 0
Prior Visual-guided Self-supervised Learning Enables Color Vignetting Correction for High-throughput Microscopic Imaging.
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-16 | DOI: 10.1109/JBHI.2024.3471907
Jianhang Wang, Tianyu Ma, Luhong Jin, Yunqi Zhu, Jiahui Yu, Feng Chen, Shujun Fu, Yingke Xu

Vignetting constitutes a prevalent optical degradation that significantly compromises the quality of biomedical microscopic imaging. However, a robust and efficient vignetting correction methodology for multi-channel microscopic images remains absent at present. In this paper, we take advantage of prior knowledge about the homogeneity of microscopic images and the radial attenuation property of vignetting to develop a self-supervised deep learning algorithm that achieves complex vignetting removal in color microscopic images. Our proposed method, vignetting correction lookup table (VCLUT), is trainable on both single and multiple images, which employs adversarial learning to effectively transfer good imaging conditions from the user visually defined central region of its own light field to the entire image. To illustrate its effectiveness, we performed individual correction experiments on data from five distinct biological specimens. The results demonstrate that VCLUT exhibits enhanced performance compared to classical methods. We further examined its performance as a multi-image-based approach on a pathological dataset, revealing its advantage over other state-of-the-art approaches in both qualitative and quantitative measurements. Moreover, it uniquely possesses the capacity for generalization across various levels of vignetting intensity and an ultra-fast model computation capability, rendering it well-suited for integration into high-throughput imaging pipelines of digital microscopy.
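
VCLUT itself is learned adversarially; the toy example below only illustrates the classical prior it builds on, namely that vignetting acts as a radially attenuating gain field over an otherwise homogeneous scene, so dividing by an estimated gain flattens the image. The quadratic falloff model and its coefficient are assumptions for the demo.

```python
import numpy as np

h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)   # normalized radius in [0, 1]

flat_scene = np.full((h, w), 0.8)          # ideal homogeneous field (the homogeneity prior)
gain = 1.0 - 0.5 * r ** 2                  # assumed radial attenuation (vignetting) profile
observed = flat_scene * gain

# A correction "lookup" is just the inverse gain at each pixel's radius; dividing the
# observation by the estimated gain restores the flat field.
estimated_gain = observed / observed.max()             # crude flat-field estimate
corrected = observed / np.clip(estimated_gain, 1e-3, None)

print(float(corrected.std()), float(observed.std()))   # corrected field is much flatter
```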

Citations: 0
mDARTS: Searching ML-Based ECG Classifiers Against Membership Inference Attacks
IF 6.7 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems | Pub Date: 2024-10-16 | DOI: 10.1109/JBHI.2024.3481505
Eunbin Park;Youngjoo Lee
This paper addresses the critical need for electrocardiogram (ECG) classifier architectures that balance high classification performance with robust privacy protection against membership inference attacks (MIA). We introduce a comprehensive approach that innovates in both machine learning efficacy and privacy preservation. Key contributions include the development of a privacy estimator to quantify and mitigate privacy leakage in neural network architectures used for ECG classification. Utilizing this privacy estimator, we propose mDARTS (searching ML-based ECG classifier against MIA), integrating MIA's attack loss into the architecture search process to identify architectures that are both accurate and resilient to MIA threats. Our method achieves significant improvements, with an ECG classification accuracy of 92.1% and a lower privacy score of 54.3%, indicating reduced potential for sensitive information leakage. Heuristic experiments refine architecture search parameters specifically for ECG classification, enhancing classifier performance and privacy scores by up to 3.0% and 1.0%, respectively. The framework's adaptability supports user customization, enabling the extraction of architectures that meet specific criteria such as optimal classification performance with minimal privacy risk. By focusing on the intersection of high-performance ECG classification and the mitigation of privacy risks associated with MIA, our study offers a pioneering solution addressing the limitations of previous approaches.
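
The privacy estimator is the paper's own contribution; the fragment below is only a conceptual, assumed sketch of the kind of search objective described, a standard classification loss combined with a membership-inference attack term weighted by a trade-off coefficient, so that candidate architectures on which the attack fails are favored. The attack logits, membership labels, and the weight lam are placeholders.

```python
import torch
import torch.nn.functional as F

def search_objective(logits, labels, attack_logits, membership, lam=0.5):
    """Classification loss plus a weighted MIA term, as a conceptual stand-in for
    integrating attack loss into the architecture-search objective.

    attack_logits: an attack model's member/non-member prediction per sample.
    membership:    1.0 if the sample was in the candidate model's training set, else 0.0.
    The attack loss enters with a negative sign, so candidates that make the
    attack perform badly (higher attack loss, i.e. more private) score better.
    """
    task_loss = F.cross_entropy(logits, labels)
    attack_loss = F.binary_cross_entropy_with_logits(attack_logits, membership)
    return task_loss - lam * attack_loss

# Toy tensors: 8 ECG segments, 5 rhythm classes.
logits = torch.randn(8, 5, requires_grad=True)
labels = torch.randint(0, 5, (8,))
attack_logits = torch.randn(8)
membership = torch.randint(0, 2, (8,)).float()

loss = search_objective(logits, labels, attack_logits, membership, lam=0.5)
loss.backward()
print(float(loss))
```
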
Citations: 0