
Artificial Intelligence in Medicine: Latest Publications

LCDL: Classification of ICD codes based on disease label co-occurrence dependency and LongFormer with medical knowledge
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-01 · DOI: 10.1016/j.artmed.2024.103041
Yumeng Yang, Hongfei Lin, Zhihao Yang, Yijia Zhang, Di Zhao, Ling Luo
Medical coding involves assigning codes to clinical free-text documents, specifically medical records that average over 3,000 markers, in order to track patient diagnoses and treatments. This is typically done manually by healthcare professionals. To improve efficiency and accuracy while reducing the workload on these professionals, researchers have employed multi-label classification approaches. The tens of thousands of ICD codes exhibit a long-tail phenomenon: a few codes (representing common diseases) are assigned frequently, while the majority (representing rare diseases) are assigned rarely. To address this challenge, this paper presents LCDL, a model that combines the LongFormer pre-trained language model with a disease label co-occurrence map. To further improve automated medical coding in the biomedical domain, hierarchies of medical knowledge, synonyms, and abbreviations are introduced to enrich the medical knowledge representation. Extensive evaluations on the benchmark MIMIC-III dataset show competitive performance compared with previous state-of-the-art methods.
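The abstract gives no implementation details, but the idea of a disease label co-occurrence map can be illustrated with a short, self-contained sketch: build an empirical co-occurrence matrix from the training labels and use it to smooth per-code probabilities. The function names and the blending weight `alpha` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def build_cooccurrence(label_matrix: np.ndarray) -> np.ndarray:
    """Row-normalized label co-occurrence matrix.

    label_matrix: (num_docs, num_labels) binary matrix of assigned ICD codes.
    Row i of the result gives the empirical probability of each code
    co-occurring with code i in the training set.
    """
    counts = label_matrix.T @ label_matrix          # raw co-occurrence counts
    np.fill_diagonal(counts, 0)                     # ignore self co-occurrence
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_sums, 1)         # row-normalize, avoid division by zero

def smooth_logits(logits: np.ndarray, cooc: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Blend each code's own score with evidence propagated from co-occurring codes."""
    probs = 1.0 / (1.0 + np.exp(-logits))           # per-label sigmoid
    propagated = probs @ cooc.T                     # evidence from related codes
    return (1 - alpha) * probs + alpha * propagated

# Toy example: 4 documents, 3 ICD codes.
Y = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
cooc = build_cooccurrence(Y)
print(smooth_logits(np.array([[2.0, -0.5, -3.0]]), cooc))
# Code 1 is boosted because it frequently co-occurs with the confidently predicted code 0.
```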
{"title":"LCDL: Classification of ICD codes based on disease label co-occurrence dependency and LongFormer with medical knowledge","authors":"Yumeng Yang ,&nbsp;Hongfei Lin ,&nbsp;Zhihao Yang ,&nbsp;Yijia Zhang ,&nbsp;Di Zhao ,&nbsp;Ling Luo","doi":"10.1016/j.artmed.2024.103041","DOIUrl":"10.1016/j.artmed.2024.103041","url":null,"abstract":"<div><div>Medical coding involves assigning codes to clinical free-text documents, specifically medical records that average over 3,000 markers, in order to track patient diagnoses and treatments. This is typically accomplished through manual assignments by healthcare professionals. To improve efficiency and accuracy while reducing the workload on these professionals, researchers have employed a multi-label classification approach. Since the long-tail phenomenon impacts tens of thousands of ICD codes, whereby only a few codes (representative of common diseases) are frequently assigned, while the majority of codes (representative of rare diseases) are infrequently assigned, this paper presents an LCDL model that addresses the challenge at hand by examining the LongFormer pre-trained language model and the disease label co-occurrence map. To enhance the performance of automated medical coding in the biomedical domain, hierarchies with medical knowledge, synonyms and abbreviations are introduced, improving the medical knowledge representation. Test evaluations are extensively conducted on the benchmark dataset MIMIC-III, and obtained the competitive performance compared to the previous state-of-the-art methods.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"160 ","pages":"Article 103041"},"PeriodicalIF":6.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142820289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neural Architecture Search for biomedical image classification: A comparative study across data modalities
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-01 · DOI: 10.1016/j.artmed.2024.103064
Zeki Kuş, Musa Aydin, Berna Kiraz, Alper Kiraz
Deep neural networks have significantly advanced medical image classification across various modalities and tasks. However, manually designing these networks is often time-consuming and suboptimal. Neural Architecture Search (NAS) automates this process, potentially finding more efficient and effective models. This study provides a comprehensive comparative analysis of our two NAS methods, PBC-NAS and BioNAS, across multiple biomedical image classification tasks using the MedMNIST dataset. Our experiments evaluate these methods based on classification performance (Accuracy (ACC) and Area Under the Curve (AUC)) and computational complexity (Floating Point Operation Counts). Results demonstrate that BioNAS models slightly outperform PBC-NAS models in accuracy, with BioNAS-2 achieving the highest average accuracy of 0.848. However, PBC-NAS models exhibit superior computational efficiency, with PBC-NAS-2 achieving the lowest average computational cost of 0.82 GFLOPs. Both methods outperform state-of-the-art architectures like ResNet-18 and ResNet-50 and AutoML frameworks such as auto-sklearn, AutoKeras, and Google AutoML. Additionally, PBC-NAS and BioNAS outperform other NAS studies in average ACC results (except MSTF-NAS), and show highly competitive results in average AUC. We conduct extensive ablation studies to investigate the impact of architectural parameters, the effectiveness of fine-tuning, search space efficiency, and the discriminative performance of generated architectures. These studies reveal that larger filter sizes and specific numbers of stacks or modules enhance performance. Fine-tuning existing architectures can achieve nearly optimal results without separating NAS for each dataset. Furthermore, we analyze search space efficiency, uncovering patterns in frequently selected operations and architectural choices. This study highlights the strengths and efficiencies of PBC-NAS and BioNAS, providing valuable insights and guidance for future research and practical applications in biomedical image classification.
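The abstract reports accuracy, AUC and FLOPs for each candidate; the snippet below is a hedged sketch of that style of comparison on a shared test set. The candidate names, predicted probabilities and GFLOPs figures are placeholders, not results from the study.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical per-architecture results: predicted probabilities on one test set
# plus an estimated complexity in GFLOPs (placeholder values only).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
candidates = {
    "BioNAS-2":  {"proba": np.array([.2, .9, .8, .3, .7, .4, .9, .6]), "gflops": 1.4},
    "PBC-NAS-2": {"proba": np.array([.3, .8, .7, .4, .6, .2, .8, .7]), "gflops": 0.8},
}

for name, run in candidates.items():
    pred = (run["proba"] >= 0.5).astype(int)        # threshold probabilities at 0.5
    acc = accuracy_score(y_true, pred)
    auc = roc_auc_score(y_true, run["proba"])
    print(f"{name}: ACC={acc:.3f}  AUC={auc:.3f}  GFLOPs={run['gflops']:.2f}")
```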
{"title":"Neural Architecture Search for biomedical image classification: A comparative study across data modalities","authors":"Zeki Kuş ,&nbsp;Musa Aydin ,&nbsp;Berna Kiraz ,&nbsp;Alper Kiraz","doi":"10.1016/j.artmed.2024.103064","DOIUrl":"10.1016/j.artmed.2024.103064","url":null,"abstract":"<div><div>Deep neural networks have significantly advanced medical image classification across various modalities and tasks. However, manually designing these networks is often time-consuming and suboptimal. Neural Architecture Search (NAS) automates this process, potentially finding more efficient and effective models. This study provides a comprehensive comparative analysis of our two NAS methods, PBC-NAS and BioNAS, across multiple biomedical image classification tasks using the MedMNIST dataset. Our experiments evaluate these methods based on classification performance (Accuracy (ACC) and Area Under the Curve (AUC)) and computational complexity (Floating Point Operation Counts). Results demonstrate that BioNAS models slightly outperform PBC-NAS models in accuracy, with BioNAS-2 achieving the highest average accuracy of 0.848. However, PBC-NAS models exhibit superior computational efficiency, with PBC-NAS-2 achieving the lowest average FLOPs of 0.82 GB. Both methods outperform state-of-the-art architectures like ResNet-18 and ResNet-50 and AutoML frameworks such as auto-sklearn, AutoKeras, and Google AutoML. Additionally, PBC-NAS and BioNAS outperform other NAS studies in average ACC results (except MSTF-NAS), and show highly competitive results in average AUC. We conduct extensive ablation studies to investigate the impact of architectural parameters, the effectiveness of fine-tuning, search space efficiency, and the discriminative performance of generated architectures. These studies reveal that larger filter sizes and specific numbers of stacks or modules enhance performance. Fine-tuning existing architectures can achieve nearly optimal results without separating NAS for each dataset. Furthermore, we analyze search space efficiency, uncovering patterns in frequently selected operations and architectural choices. This study highlights the strengths and efficiencies of PBC-NAS and BioNAS, providing valuable insights and guidance for future research and practical applications in biomedical image classification.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"160 ","pages":"Article 103064"},"PeriodicalIF":6.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Glaucoma detection: Binocular approach and clinical data in machine learning
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-01 · DOI: 10.1016/j.artmed.2024.103050
Oleksandr Kovalyk-Borodyak, Juan Morales-Sánchez, Rafael Verdú-Monedero, José-Luis Sancho-Gómez
In this work, we present a multi-modal machine learning method to automate early glaucoma diagnosis. The proposed methodology introduces two novel aspects for automated diagnosis not previously explored in the literature: simultaneous use of ocular fundus images from both eyes and integration with the patient’s additional clinical data. We begin by establishing a baseline, termed monocular mode, which adheres to the traditional approach of considering the data from each eye as a separate instance. We then explore the binocular mode, investigating how combining information from both eyes of the same patient can enhance glaucoma diagnosis accuracy. This exploration employs the PAPILA dataset, comprising information from both eyes, clinical data, ocular fundus images, and expert segmentation of these images. Additionally, we compare two image-derived data modalities: direct ocular fundus images and morphological data from manual expert segmentation. Our method integrates Gradient-Boosted Decision Trees (GBDT) and Convolutional Neural Networks (CNN), specifically focusing on the MobileNet, VGG16, ResNet-50, and Inception models. SHAP values are used to interpret GBDT models, while the Deep Explainer method is applied in conjunction with SHAP to analyze the outputs of convolutional-based models. Our findings show the viability of considering both eyes, which improves the model performance. The binocular approach, incorporating information from morphological and clinical data, yielded an AUC of 0.796 (±0.003 at a 95% confidence interval), while the CNN, using the same approach (both eyes), achieved an AUC of 0.764 (±0.005 at a 95% confidence interval).
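A minimal sketch of the binocular mode under stated assumptions: hypothetical per-eye morphological features and clinical variables are merged into one instance per patient before fitting a gradient-boosted classifier (SHAP values could then be computed on the fitted model). Feature names and data are illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_patients = 200
per_eye = ["cup_disc_ratio", "rim_area"]            # illustrative morphological features
clinical = ["age", "iop"]                           # illustrative clinical variables

df = pd.DataFrame({
    **{f"{f}_right": rng.normal(size=n_patients) for f in per_eye},
    **{f"{f}_left":  rng.normal(size=n_patients) for f in per_eye},
    **{f: rng.normal(size=n_patients) for f in clinical},
    "glaucoma": rng.integers(0, 2, size=n_patients),
})

# Monocular mode: each eye is treated as a separate instance (right eye shown).
X_mono = df[[f"{f}_right" for f in per_eye] + clinical]
# Binocular mode: both eyes of the same patient form a single instance.
X_bino = df[[c for c in df.columns if c != "glaucoma"]]

clf_mono = GradientBoostingClassifier().fit(X_mono, df["glaucoma"])
clf_bino = GradientBoostingClassifier().fit(X_bino, df["glaucoma"])
print(clf_bino.predict_proba(X_bino.iloc[:3])[:, 1])   # per-patient glaucoma probability
```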
{"title":"Glaucoma detection: Binocular approach and clinical data in machine learning","authors":"Oleksandr Kovalyk-Borodyak,&nbsp;Juan Morales-Sánchez,&nbsp;Rafael Verdú-Monedero,&nbsp;José-Luis Sancho-Gómez","doi":"10.1016/j.artmed.2024.103050","DOIUrl":"10.1016/j.artmed.2024.103050","url":null,"abstract":"<div><div>In this work, we present a multi-modal machine learning method to automate early glaucoma diagnosis. The proposed methodology introduces two novel aspects for automated diagnosis not previously explored in the literature: simultaneous use of ocular fundus images from both eyes and integration with the patient’s additional clinical data. We begin by establishing a baseline, termed <em>monocular mode</em>, which adheres to the traditional approach of considering the data from each eye as a separate instance. We then explore the <em>binocular mode</em>, investigating how combining information from both eyes of the same patient can enhance glaucoma diagnosis accuracy. This exploration employs the PAPILA dataset, comprising information from both eyes, clinical data, ocular fundus images, and expert segmentation of these images. Additionally, we compare two image-derived data modalities: direct ocular fundus images and morphological data from manual expert segmentation. Our method integrates Gradient-Boosted Decision Trees (GBDT) and Convolutional Neural Networks (CNN), specifically focusing on the MobileNet, VGG16, ResNet-50, and Inception models. SHAP values are used to interpret GBDT models, while the Deep Explainer method is applied in conjunction with SHAP to analyze the outputs of convolutional-based models. Our findings show the viability of considering both eyes, which improves the model performance. The binocular approach, incorporating information from morphological and clinical data yielded an AUC of 0.796 (<span><math><mrow><mo>±</mo><mn>0</mn><mo>.</mo><mn>003</mn></mrow></math></span> at a 95% confidence interval), while the CNN, using the same approach (both eyes), achieved an AUC of 0.764 (<span><math><mrow><mo>±</mo><mn>0</mn><mo>.</mo><mn>005</mn></mrow></math></span> at a 95% confidence interval).</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"160 ","pages":"Article 103050"},"PeriodicalIF":6.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142866491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic classification of HEp-2 specimens by explainable deep learning and Jensen-Shannon reliability index
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-01 · DOI: 10.1016/j.artmed.2024.103030
A. Mencattini, T. Tocci, M. Nuccetelli, M. Pieri, S. Bernardini, E. Martinelli
The Anti-Nuclear Antibodies (ANA) test using Human Epithelial type 2 (HEp-2) cells in the Indirect Immuno-Fluorescence (IIF) assay protocol is considered the gold standard for detecting Connective Tissue Diseases. Computer-assisted systems for HEp-2 image analysis represent a growing field that harnesses the potential offered by novel machine learning techniques to address the classification of HEp-2 images and ANA patterns.
In this study, we introduce an innovative platform based on transfer learning with pre-trained deep learning models. This platform combines the power of unsupervised deep description of HEp-2 images, a novel feature selection approach designed for unbalanced datasets, and independent testing using two distinct datasets from different hospitals to tackle cross-hardware compatibility issues. To enhance the trustworthiness of our method, we also present a modified version of gradient-weighted class activation mapping for regional explainability and introduce a new sample quality index based on the Jensen-Shannon divergence to enhance method reliability and quantify sample heterogeneity.
The results we provide demonstrate exceptionally high performance in intensity and ANA pattern recognition when compared to state-of-the-art approaches. Our method's ability to eliminate the need for cell segmentation in favor of statistical analysis of the sample makes it applicable, robust, and versatile. Our future work will focus on addressing the challenge of mitotic spindle recognition by expanding our proposed approach to cover mixed patterns.
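The exact form of the paper's Jensen-Shannon reliability index is not given here; a plausible sketch is to compare a sample's score distribution against a reference population and map the Jensen-Shannon distance onto a [0, 1] index. The histogramming choices below are assumptions.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_quality_index(sample_scores: np.ndarray,
                     reference_scores: np.ndarray,
                     bins: int = 32) -> float:
    """Index in [0, 1]: values near 1 mean the sample's score distribution matches
    the reference population; lower values flag heterogeneous or atypical samples.
    jensenshannon() returns the JS distance (square root of the divergence),
    which is bounded by 1 when base=2."""
    lo = min(sample_scores.min(), reference_scores.min())
    hi = max(sample_scores.max(), reference_scores.max())
    p, _ = np.histogram(sample_scores, bins=bins, range=(lo, hi))
    q, _ = np.histogram(reference_scores, bins=bins, range=(lo, hi))
    return 1.0 - jensenshannon(p, q, base=2)         # scipy normalizes p and q internally

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 5000)               # cell-level scores of a reference set
typical   = rng.normal(0.0, 1.0, 300)                # homogeneous sample
atypical  = rng.normal(1.5, 2.0, 300)                # heterogeneous sample
print(js_quality_index(typical, reference))          # higher: consistent with the reference
print(js_quality_index(atypical, reference))         # lower: flagged as heterogeneous
```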
{"title":"Automatic classification of HEp-2 specimens by explainable deep learning and Jensen-Shannon reliability index","authors":"A. Mencattini ,&nbsp;T. Tocci ,&nbsp;M. Nuccetelli ,&nbsp;M. Pieri ,&nbsp;S. Bernardini ,&nbsp;E. Martinelli","doi":"10.1016/j.artmed.2024.103030","DOIUrl":"10.1016/j.artmed.2024.103030","url":null,"abstract":"<div><div>The Anti-Nuclear Antibodies (ANA) test using Human Epithelial type 2 (HEp-2) cells in the Indirect Immuno-Fluorescence (IIF) assay protocol is considered the gold standard for detecting Connective Tissue Diseases. Computer-assisted systems for HEp-2 image analysis represent a growing field that harnesses the potential offered by novel machine learning techniques to address the classification of HEp-2 images and ANA patterns.</div><div>In this study, we introduce an innovative platform based on transfer learning with pre-trained deep learning models. This platform combines the power of unsupervised deep description of HEp-2 images, a novel feature selection approach designed for unbalanced datasets, and independent testing using two distinct datasets from different hospitals to tackle cross-hardware compatibility issues. To enhance the trustworthiness of our method, we also present a modified version of gradient-weighted class activation mapping for regional explainability and introduce a new sample quality index based on the Jensen-Shannon divergence to enhance method reliability and quantify sample heterogeneity.</div><div>The results we provide demonstrate exceptionally high performance in intensity and ANA pattern recognition when compared to state-of-the-art approaches. Our method's ability to eliminate the need for cell segmentation in favor of statistical analysis of the sample makes it applicable, robust, and versatile. Our future work will focus on addressing the challenge of mitotic spindle recognition by expanding our proposed approach to cover mixed patterns.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"160 ","pages":"Article 103030"},"PeriodicalIF":6.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TransformerLSR: Attentive joint model of longitudinal data, survival, and recurrent events with concurrent latent structure
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-01 · DOI: 10.1016/j.artmed.2024.103056
Zhiyue Zhang, Yao Zhao, Yanxun Xu
In applications such as biomedical studies, epidemiology, and social sciences, recurrent events often co-occur with longitudinal measurements and a terminal event, such as death. Therefore, jointly modeling longitudinal measurements, recurrent events, and survival data while accounting for their dependencies is critical. While joint models for the three components exist in statistical literature, many of these approaches are limited by heavy parametric assumptions and scalability issues. Recently, incorporating deep learning techniques into joint modeling has shown promising results. However, current methods only address joint modeling of longitudinal measurements at regularly-spaced observation times and survival events, neglecting recurrent events. In this paper, we develop TransformerLSR, a flexible transformer-based deep modeling and inference framework to jointly model all three components simultaneously. TransformerLSR integrates deep temporal point processes into the joint modeling framework, treating recurrent and terminal events as two competing processes dependent on past longitudinal measurements and recurrent event times. Additionally, TransformerLSR introduces a novel trajectory representation and model architecture to potentially incorporate a priori knowledge of known latent structures among concurrent longitudinal variables. We demonstrate the effectiveness and necessity of TransformerLSR through simulation studies and analyzing a real-world medical dataset on patients after kidney transplantation.
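The following is not the authors' architecture but a minimal PyTorch sketch of the general idea: encode the history of longitudinal measurements with a transformer and emit non-negative intensities for the two competing processes (recurrent vs. terminal events). Layer sizes and the softplus link are illustrative choices.

```python
import torch
import torch.nn as nn

class CompetingEventHeads(nn.Module):
    """Transformer encoder over past visits with two intensity heads (sketch only)."""

    def __init__(self, n_features: int, d_model: int = 32):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.recurrent_head = nn.Linear(d_model, 1)   # intensity of recurrent events
        self.terminal_head = nn.Linear(d_model, 1)    # intensity of the terminal event

    def forward(self, history: torch.Tensor):
        # history: (batch, n_visits, n_features) past longitudinal measurements
        h = self.encoder(self.embed(history))         # (batch, n_visits, d_model)
        last = h[:, -1]                               # state after the latest visit
        lam_recurrent = nn.functional.softplus(self.recurrent_head(last))
        lam_terminal = nn.functional.softplus(self.terminal_head(last))
        return lam_recurrent, lam_terminal

model = CompetingEventHeads(n_features=5)
lam_r, lam_t = model(torch.randn(2, 7, 5))            # 2 patients, 7 visits, 5 measurements
print(lam_r.shape, lam_t.shape)                       # torch.Size([2, 1]) twice
```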
{"title":"TransformerLSR: Attentive joint model of longitudinal data, survival, and recurrent events with concurrent latent structure","authors":"Zhiyue Zhang ,&nbsp;Yao Zhao ,&nbsp;Yanxun Xu","doi":"10.1016/j.artmed.2024.103056","DOIUrl":"10.1016/j.artmed.2024.103056","url":null,"abstract":"<div><div>In applications such as biomedical studies, epidemiology, and social sciences, recurrent events often co-occur with longitudinal measurements and a terminal event, such as death. Therefore, jointly modeling longitudinal measurements, recurrent events, and survival data while accounting for their dependencies is critical. While joint models for the three components exist in statistical literature, many of these approaches are limited by heavy parametric assumptions and scalability issues. Recently, incorporating deep learning techniques into joint modeling has shown promising results. However, current methods only address joint modeling of longitudinal measurements at regularly-spaced observation times and survival events, neglecting recurrent events. In this paper, we develop TransformerLSR, a flexible transformer-based deep modeling and inference framework to jointly model all three components simultaneously. TransformerLSR integrates deep temporal point processes into the joint modeling framework, treating recurrent and terminal events as two competing processes dependent on past longitudinal measurements and recurrent event times. Additionally, TransformerLSR introduces a novel trajectory representation and model architecture to potentially incorporate <em>a priori</em> knowledge of known latent structures among concurrent longitudinal variables. We demonstrate the effectiveness and necessity of TransformerLSR through simulation studies and analyzing a real-world medical dataset on patients after kidney transplantation.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"160 ","pages":"Article 103056"},"PeriodicalIF":6.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Training and validating a treatment recommender with partial verification evidence
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-01 · DOI: 10.1016/j.artmed.2024.103062
Vishnu Unnikrishnan, Clara Puga, Miro Schleicher, Uli Niemann, Berthold Langguth, Stefan Schoisswohl, Birgit Mazurek, Rilana Cima, Jose Antonio Lopez-Escamez, Dimitris Kikidis, Eleftheria Vellidou, Rüdiger Pryss, Winfried Schlee, Myra Spiliopoulou

Background:

Current clinical decision support systems (DSS) are trained and validated on observational data from the clinic in which the DSS is going to be applied. This is problematic for treatments that have already been validated in a randomized clinical trial (RCT) but have not yet been introduced in any clinic. In this work, we report on a method for training and validating the DSS core before its introduction to a clinic, using the RCT data themselves. The key challenges we address concern missingness: foremost, the missing rationale when assigning a treatment to a patient (the assignment is at random), and the missing verification evidence, since the effectiveness of a treatment for a patient can only be verified (ground truth) if the treatment was indeed assigned to that patient, yet that assignment was made at random.

Materials:

We use the data of a multi-armed clinical trial that investigated the effectiveness of single treatments and combination treatments for 240+ tinnitus patients recruited and treated in 5 clinical centres.

Methods:

To deal with the ‘missing rationale for treatment assignment’ challenge, we re-model the target variable that measures the outcome of interest, in order to suppress the effect of the individual (randomly assigned) treatment and to control for the effect of treatment in general. To deal with missing features for many patients, we use a learning core that is robust to missing features. Further, we build ensembles that parsimoniously exploit the small patient numbers available for learning. To deal with the ‘missing verification evidence’ challenge, we introduce counterfactual treatment verification, a verification scheme that juxtaposes the effectiveness of our approach's recommendations with the effectiveness of the RCT assignments in the cases where the two agree and where they disagree.

Results and limitations:

We demonstrate that our approach leverages the RCT data for learning and verification by showing that the DSS suggests treatments that improve the outcome. The results are limited by the small number of patients per treatment; while our ensemble is designed to mitigate this effect, the predictive performance of the methods is affected by the small size of the data.

Outlook:

We provide a basis for establishing decision-support routines for treatments that have been tested in RCTs but have not yet been deployed clinically. Practitioners can use our approach to train and validate a DSS on new treatments simply by using the RCT data available to them. More work is needed to strengthen the robustness of the predictors. Since no data beyond those already used are available for this purpose, synthetic data generation seems an appropriate alternative.
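A minimal sketch of the counterfactual treatment verification scheme described under Methods, assuming hypothetical per-patient records: outcomes are compared between cases where the recommender agrees with the RCT assignment and cases where it disagrees. Column names and values are illustrative.

```python
import pandas as pd

# Hypothetical records: the treatment assigned in the RCT, the treatment the DSS
# would have recommended, and the observed outcome change (placeholder values).
df = pd.DataFrame({
    "rct_treatment":       ["A", "B", "A", "C", "B", "C", "A", "B"],
    "dss_recommendation":  ["A", "A", "A", "C", "C", "C", "B", "B"],
    "outcome_improvement": [4.0, 1.0, 3.5, 5.0, 0.5, 4.5, 1.5, 3.0],
})

agree = df["rct_treatment"] == df["dss_recommendation"]
summary = df.groupby(agree)["outcome_improvement"].agg(["mean", "count"])
summary.index = summary.index.map({True: "agreement", False: "disagreement"})
print(summary)
# Systematically better outcomes in the agreement group are read as (partial)
# evidence that following the recommender would have improved the outcome.
```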
{"title":"Training and validating a treatment recommender with partial verification evidence","authors":"Vishnu Unnikrishnan ,&nbsp;Clara Puga ,&nbsp;Miro Schleicher ,&nbsp;Uli Niemann ,&nbsp;Berthold Langguth ,&nbsp;Stefan Schoisswohl ,&nbsp;Birgit Mazurek ,&nbsp;Rilana Cima ,&nbsp;Jose Antonio Lopez-Escamez ,&nbsp;Dimitris Kikidis ,&nbsp;Eleftheria Vellidou ,&nbsp;Rüdiger Pryss ,&nbsp;Winfried Schlee ,&nbsp;Myra Spiliopoulou","doi":"10.1016/j.artmed.2024.103062","DOIUrl":"10.1016/j.artmed.2024.103062","url":null,"abstract":"<div><h3>Background:</h3><div>Current clinical decision support systems (DSS) are trained and validated on observational data from the clinic in which the DSS is going to be applied. This is problematic for treatments that have already been validated in a randomized clinical trial (RCT), but have not yet been introduced in any clinic. In this work, we report on a method for training and validating the DSS core before introduction to a clinic, using the RCT data themselves. The key challenges we address are of missingness, foremost: missing rationale when assigning a treatment to a patient (the assignment is at random), and missing verification evidence, since the effectiveness of a treatment for a patient can only be verified (ground truth) if the treatment was indeed assigned to the patient — but then the assignment was at random.</div></div><div><h3>Materials:</h3><div>We use the data of a multi-armed clinical trial that investigated the effectiveness of single treatments and combination treatments for 240+ tinnitus patients recruited and treated in 5 clinical centres.</div></div><div><h3>Methods:</h3><div>To deal with the ‘missing rationale for treatment assignment’ challenge, we re-model the target variable that measures the outcome of interest, in order to suppress the effect of the individual treatment, which was at random, and control on the effect of treatment in general. To deal with missing features for many patients, we use a learning core that is robust to missing features. Further, we build ensembles that parsimoniously exploit the small patient numbers we have for learning. To deal with the ‘missing verification evidence’ challenge, we introduce <em>counterfactual treatment verification</em>, a verification scheme that juxtaposes the effectiveness of the recommendations of our approach to the effectiveness of the RCT assignments in the cases of agreement/disagreement between the two.</div></div><div><h3>Results and limitations:</h3><div>We demonstrate that our approach leverages the RCT data for learning and verification, by showing that the DSS suggests treatments that improve the outcome. The results are limited through the small number of patients per treatment; while our ensemble is designed to mitigate this effect, the predictive performance of the methods is affected by the smallness of the data.</div></div><div><h3>Outlook:</h3><div>We provide a basis for the establishment of decision supporting routines on treatments that have been tested in RCTs but have not yet been deployed clinically. Practitioners can use our approach to train and validate a DSS on new treatments by simply using the RCT data available to them. More work is needed to strengthen the robustness of the predictors. 
Since there are no further data available to this purpose, but those already used, the potential of synthetic data generation seems an appropriate alternative.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"160 ","pages":"Article 103062"},"PeriodicalIF":6.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ECGEFNet: A two-branch deep learning model for calculating left ventricular ejection fraction using electrocardiogram
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-01 · DOI: 10.1016/j.artmed.2024.103065
Yiqiu Qi, Guangyuan Li, Jinzhu Yang, Honghe Li, Qi Yu, Mingjun Qu, Hongxia Ning, Yonghuai Wang
Left ventricular systolic dysfunction (LVSD) and its severity are correlated with the prognosis of cardiovascular diseases, so early detection and monitoring of LVSD are of utmost importance. Left ventricular ejection fraction (LVEF) is an essential indicator for evaluating left ventricular function in clinical practice; however, the current echocardiography-based evaluation method is not available in primary care and cannot easily provide real-time monitoring of cardiac dysfunction. We propose a two-branch deep learning model (ECGEFNet) for calculating LVEF from the electrocardiogram (ECG), which holds the potential to serve as a primary medical screening tool and to facilitate long-term dynamic monitoring of cardiac functional impairment. It integrates the original numerical signal and waveform plots derived from the signal in an innovative manner, enabling joint calculation of LVEF by incorporating diverse information encompassing temporal, spatial and phase aspects. To address the inadequate information interaction between the two branches and the inefficiency of feature fusion, we propose the fusion attention mechanism (FAT) and the two-branch feature fusion module (BFF) to guide the learning, alignment and fusion of features from both branches. We assemble a large internal dataset and perform experimental validation on it. The accuracy of cardiac dysfunction screening is 92.3% and the mean absolute error (MAE) of the LVEF calculation is 4.57%. The proposed model performs well, outperforms existing basic models, and is of great significance for real-time monitoring of the degree of cardiac dysfunction.
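The FAT and BFF modules are not described here in enough detail to reproduce, so the sketch below uses plain feature concatenation to convey only the overall two-branch design: a 1-D branch over the raw ECG signal and a 2-D branch over its rendered waveform plot, combined into a single LVEF regression. Shapes and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class TwoBranchLVEF(nn.Module):
    """Simplified two-branch LVEF regressor (concatenation stands in for FAT/BFF fusion)."""

    def __init__(self):
        super().__init__()
        self.signal_branch = nn.Sequential(           # 1-D conv over a 12-lead ECG signal
            nn.Conv1d(12, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.image_branch = nn.Sequential(            # 2-D conv over the waveform plot
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, signal: torch.Tensor, plot: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.signal_branch(signal), self.image_branch(plot)], dim=1)
        return self.head(feats)                       # predicted LVEF

model = TwoBranchLVEF()
lvef = model(torch.randn(4, 12, 5000), torch.randn(4, 1, 128, 128))
print(lvef.shape)                                     # torch.Size([4, 1])
```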
{"title":"ECGEFNet: A two-branch deep learning model for calculating left ventricular ejection fraction using electrocardiogram","authors":"Yiqiu Qi ,&nbsp;Guangyuan Li ,&nbsp;Jinzhu Yang ,&nbsp;Honghe Li ,&nbsp;Qi Yu ,&nbsp;Mingjun Qu ,&nbsp;Hongxia Ning ,&nbsp;Yonghuai Wang","doi":"10.1016/j.artmed.2024.103065","DOIUrl":"10.1016/j.artmed.2024.103065","url":null,"abstract":"<div><div>Left ventricular systolic dysfunction (LVSD) and its severity are correlated with the prognosis of cardiovascular diseases. Early detection and monitoring of LVSD are of utmost importance. Left ventricular ejection fraction (LVEF) is an essential indicator for evaluating left ventricular function in clinical practice, the current echocardiography-based evaluation method is not avaliable in primary care and difficult to achieve real-time monitoring capabilities for cardiac dysfunction. We propose a two-branch deep learning model (ECGEFNet) for calculating LVEF using electrocardiogram (ECG), which holds the potential to serve as a primary medical screening tool and facilitate long-term dynamic monitoring of cardiac functional impairments. It integrates original numerical signal and waveform plots derived from the signals in an innovative manner, enabling joint calculation of LVEF by incorporating diverse information encompassing temporal, spatial and phase aspects. To address the inadequate information interaction between the two branches and the lack of efficiency in feature fusion, we propose the fusion attention mechanism (FAT) and the two-branch feature fusion module (BFF) to guide the learning, alignment and fusion of features from both branches. We assemble a large internal dataset and perform experimental validation on it. The accuracy of cardiac dysfunction screening is 92.3%, the mean absolute error (MAE) in LVEF calculation is 4.57%. The proposed model performs well and outperforms existing basic models, and is of great significance for real-time monitoring of the degree of cardiac dysfunction.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"160 ","pages":"Article 103065"},"PeriodicalIF":6.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AI-based non-invasive imaging technologies for early autism spectrum disorder diagnosis: A short review and future directions
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-01-31 · DOI: 10.1016/j.artmed.2025.103074
Mostafa Abdelrahim, Mohamed Khudri, Ahmed Elnakib, Mohamed Shehata, Kate Weafer, Ashraf Khalil, Gehad A. Saleh, Nihal M. Batouty, Mohammed Ghazal, Sohail Contractor, Gregory Barnes, Ayman El-Baz
Autism Spectrum Disorder (ASD) is a neurological condition, with recent statistics from the CDC indicating a rising prevalence of ASD diagnoses among infants and children. This trend emphasizes the critical importance of early detection, as timely diagnosis facilitates early intervention and enhances treatment outcomes. Consequently, there is an increasing urgency for research to develop innovative tools capable of accurately and objectively identifying ASD in its earliest stages. This paper offers a short overview of recent advancements in non-invasive technology for early ASD diagnosis, focusing on one imaging modality, structural MRI, which has shown promising results. This brief review aims to address several key questions: (i) Which imaging radiomics are associated with ASD? (ii) Is the parcellation step of the brain cortex necessary to improve the diagnostic accuracy of ASD? (iii) What databases are available to researchers interested in developing non-invasive technology for ASD? (iv) How can artificial intelligence tools contribute to improving the diagnostic accuracy of ASD? Finally, our review will highlight future trends in ASD diagnostic efforts.
{"title":"AI-based non-invasive imaging technologies for early autism spectrum disorder diagnosis: A short review and future directions","authors":"Mostafa Abdelrahim ,&nbsp;Mohamed Khudri ,&nbsp;Ahmed Elnakib ,&nbsp;Mohamed Shehata ,&nbsp;Kate Weafer ,&nbsp;Ashraf Khalil ,&nbsp;Gehad A. Saleh ,&nbsp;Nihal M. Batouty ,&nbsp;Mohammed Ghazal ,&nbsp;Sohail Contractor ,&nbsp;Gregory Barnes ,&nbsp;Ayman El-Baz","doi":"10.1016/j.artmed.2025.103074","DOIUrl":"10.1016/j.artmed.2025.103074","url":null,"abstract":"<div><div>Autism Spectrum Disorder (ASD) is a neurological condition, with recent statistics from the CDC indicating a rising prevalence of ASD diagnoses among infants and children. This trend emphasizes the critical importance of early detection, as timely diagnosis facilitates early intervention and enhances treatment outcomes. Consequently, there is an increasing urgency for research to develop innovative tools capable of accurately and objectively identifying ASD in its earliest stages. This paper offers a short overview of recent advancements in non-invasive technology for early ASD diagnosis, focusing on an imaging modality, structural MRI technique, which has shown promising results in early ASD diagnosis. This brief review aims to address several key questions: (i) Which imaging radiomics are associated with ASD? (ii) Is the parcellation step of the brain cortex necessary to improve the diagnostic accuracy of ASD? (iii) What databases are available to researchers interested in developing non-invasive technology for ASD? (iv) How can artificial intelligence tools contribute to improving the diagnostic accuracy of ASD? Finally, our review will highlight future trends in ASD diagnostic efforts.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"161 ","pages":"Article 103074"},"PeriodicalIF":6.1,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143358294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hybrid approach for drug-target interaction predictions in ischemic stroke models
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-01-22 · DOI: 10.1016/j.artmed.2025.103067
Jing-Jie Peng, Yi-Yue Zhang, Rui-Feng Li, Wen-Jun Zhu, Hong-Rui Liu, Hui-Yin Li, Bin Liu, Dong-Sheng Cao, Jun Peng, Xiu-Ju Luo
Multiple cell death mechanisms are triggered during ischemic stroke and they are interconnected in a complex network with extensive crosstalk, complicating the development of targeted therapies. We therefore propose a novel framework for identifying disease-specific drug-target interaction (DTI), named strokeDTI, to extract key nodes within an interconnected graph network of activated pathways via leveraging transcriptomic sequencing data. Our findings reveal that the drugs a model can predict are highly representative of the characteristics of the database the model is trained on. However, models with comparable performance yield diametrically opposite predictions in real testing scenarios. Our analysis reveals a correlation between the reported literature on drug-target pairs and their binding scores. Leveraging this correlation, we introduced an additional module to assess the predictive validity of our model for each unique target, thereby improving the reliability of the framework's predictions. Our framework identified Cerdulatinib as a potential anti-stroke drug via targeting multiple cell death pathways, particularly necroptosis and apoptosis. Experimental validation in in vitro and in vivo models demonstrated that Cerdulatinib significantly attenuated stroke-induced brain injury via inhibiting multiple cell death pathways, improving neurological function, and reducing infarct volume. This highlights strokeDTI's potential for disease-specific drug-target identification and Cerdulatinib's potential as a potent anti-stroke drug.
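As a hedged sketch of the reported relationship between published evidence for drug-target pairs and predicted binding scores, and of how such a relationship might gate per-target reliability, the snippet below computes Spearman's rank correlation on placeholder values; the threshold is illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical drug-target pairs for one target: number of supporting publications
# versus the model's predicted binding score (placeholder values only).
literature_counts = np.array([0, 1, 2, 5, 8, 12, 20, 35])
binding_scores    = np.array([0.21, 0.30, 0.28, 0.45, 0.52, 0.61, 0.70, 0.83])

rho, p_value = spearmanr(literature_counts, binding_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")

# One way to use this as a reliability gate: only trust the model's predictions for
# targets whose score/evidence relationship is sufficiently strong.
RELIABILITY_THRESHOLD = 0.5                           # illustrative cut-off
print("target considered reliable:", rho >= RELIABILITY_THRESHOLD)
```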
{"title":"Hybrid approach for drug-target interaction predictions in ischemic stroke models","authors":"Jing-Jie Peng ,&nbsp;Yi-Yue Zhang ,&nbsp;Rui-Feng Li ,&nbsp;Wen-Jun Zhu ,&nbsp;Hong-Rui Liu ,&nbsp;Hui-Yin Li ,&nbsp;Bin Liu ,&nbsp;Dong-Sheng Cao ,&nbsp;Jun Peng ,&nbsp;Xiu-Ju Luo","doi":"10.1016/j.artmed.2025.103067","DOIUrl":"10.1016/j.artmed.2025.103067","url":null,"abstract":"<div><div>Multiple cell death mechanisms are triggered during ischemic stroke and they are interconnected in a complex network with extensive crosstalk, complicating the development of targeted therapies. We therefore propose a novel framework for identifying disease-specific drug-target interaction (DTI), named strokeDTI, to extract key nodes within an interconnected graph network of activated pathways via leveraging transcriptomic sequencing data. Our findings reveal that the drugs a model can predict are highly representative of the characteristics of the database the model is trained on. However, models with comparable performance yield diametrically opposite predictions in real testing scenarios. Our analysis reveals a correlation between the reported literature on drug-target pairs and their binding scores. Leveraging this correlation, we introduced an additional module to assess the predictive validity of our model for each unique target, thereby improving the reliability of the framework's predictions. Our framework identified Cerdulatinib as a potential anti-stroke drug via targeting multiple cell death pathways, particularly necroptosis and apoptosis. Experimental validation in in vitro and in vivo models demonstrated that Cerdulatinib significantly attenuated stroke-induced brain injury via inhibiting multiple cell death pathways, improving neurological function, and reducing infarct volume. This highlights strokeDTI's potential for disease-specific drug-target identification and Cerdulatinib's potential as a potent anti-stroke drug.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"161 ","pages":"Article 103067"},"PeriodicalIF":6.1,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143168061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Implementation of artificial intelligence approaches in oncology clinical trials: A systematic review
IF 6.1 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-01-18 · DOI: 10.1016/j.artmed.2025.103066
Marwa Saady, Mahmoud Eissa, Ahmed S. Yacoub, Ahmed B. Hamed, Hassan Mohamed El-Said Azzazy

Introduction

There is a growing interest in leveraging artificial intelligence (AI) technologies to enhance various aspects of clinical trials. The goal of this systematic review is to assess the impact of implementing AI approaches on different aspects of oncology clinical trials.

Methods

Pertinent keywords were used to find relevant articles in the PubMed, Scopus, and Google Scholar databases that described the clinical application of AI approaches. A quality evaluation was conducted using a specifically adapted, customized checklist. This study is registered with PROSPERO (CRD42024537153).

Results

Out of the 2833 studies identified, 72 satisfied the inclusion criteria. Clinical trial enrollment and eligibility were among the most commonly studied clinical trial aspects, covered in 30 papers. The prediction of outcomes was covered in 25 studies, of which 15 addressed the prediction of patients' survival and 10 the prediction of drug outcomes. Trial design was studied in 10 articles. Three studies each addressed personalized treatments and decision-making, while one addressed data management. The results demonstrate that using AI in cancer clinical trials has the potential to increase clinical trial enrollment, predict clinical outcomes, improve trial design, enhance personalized treatments, and increase concordance in decision-making. Additionally, by automating some areas and tasks, clinical trials were made more efficient and human error was minimized. Nevertheless, concerns and restrictions related to the application of AI in clinical studies are also noted.

Conclusion

AI tools have the potential to revolutionize the design, enrollment rate, and outcome prediction of oncology clinical trials.
{"title":"Implementation of artificial intelligence approaches in oncology clinical trials: A systematic review","authors":"Marwa Saady ,&nbsp;Mahmoud Eissa ,&nbsp;Ahmed S. Yacoub ,&nbsp;Ahmed B. Hamed ,&nbsp;Hassan Mohamed El-Said Azzazy","doi":"10.1016/j.artmed.2025.103066","DOIUrl":"10.1016/j.artmed.2025.103066","url":null,"abstract":"<div><h3>Introduction</h3><div>There is a growing interest in leveraging artificial intelligence (AI) technologies to enhance various aspects of clinical trials. The goal of this systematic review is to assess the impact of implementing AI approaches on different aspects of oncology clinical trials.</div></div><div><h3>Methods</h3><div>Pertinent keywords were used to find relevant articles published in PubMed, Scopus, and Google Scholar databases, which described the clinical application of AI approaches. A quality evaluation utilizing a customized checklist specifically adapted was conducted. This study is registered with PROSPERO (CRD42024537153).</div></div><div><h3>Results</h3><div>Out of the identified 2833 studies, 72 studies satisfied the inclusion criteria. Clinical Trial Enrollment &amp; Eligibility were among the most commonly studied clinical trial aspects with 30 papers. The prediction of outcomes was covered in 25 studies of which 15 addressed the prediction of patients' survival and 10 addressed the prediction of drug outcomes. The trial design was studied in 10 articles. Three studies addressed each of the personalized treatments and decision-making, while one addressed data management. The results demonstrate using AI in cancer clinical trials has the potential to increase clinical trial enrollment, predict clinical outcomes, improve trial design, enhance personalized treatments, and increase concordance in decision-making. Additionally, automating some areas and tasks, clinical trials were made more efficient, and human error was minimized. Nevertheless, concerns and restrictions related to the application of AI in clinical studies are also noted.</div></div><div><h3>Conclusion</h3><div>AI tools have the potential to revolutionize the design, enrollment rate, and outcome prediction of oncology clinical trials.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"161 ","pages":"Article 103066"},"PeriodicalIF":6.1,"publicationDate":"2025-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143016910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0