
Latest articles in Medical Image Analysis

An objective comparison of methods for augmented reality in laparoscopic liver resection by preoperative-to-intraoperative image fusion from the MICCAI2022 challenge
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-22 | DOI: 10.1016/j.media.2024.103371
Sharib Ali , Yamid Espinel , Yueming Jin , Peng Liu , Bianca Güttner , Xukun Zhang , Lihua Zhang , Tom Dowrick , Matthew J. Clarkson , Shiting Xiao , Yifan Wu , Yijun Yang , Lei Zhu , Dai Sun , Lan Li , Micha Pfeiffer , Shahid Farid , Lena Maier-Hein , Emmanuel Buc , Adrien Bartoli
Augmented reality for laparoscopic liver resection is a visualisation mode that allows a surgeon to localise tumours and vessels embedded within the liver by projecting them on top of a laparoscopic image. Preoperative 3D models extracted from Computed Tomography (CT) or Magnetic Resonance (MR) imaging data are registered to the intraoperative laparoscopic images during this process. Regarding 3D–2D fusion, most algorithms use anatomical landmarks to guide registration, such as the liver's inferior ridge, the falciform ligament, and the occluding contours. These are usually marked by hand in both the laparoscopic image and the 3D model, which is time-consuming and prone to error. Therefore, there is a need to automate this process so that augmented reality can be used effectively in the operating room. We present the Preoperative-to-Intraoperative Laparoscopic Fusion challenge (P2ILF), held during the Medical Image Computing and Computer Assisted Intervention (MICCAI 2022) conference, which investigates the possibilities of detecting these landmarks automatically and using them in registration. The challenge was divided into two tasks: (1) a 2D and 3D landmark segmentation task and (2) a 3D–2D registration task. The teams were provided with training data consisting of 167 laparoscopic images and 9 preoperative 3D models from 9 patients, with the corresponding 2D and 3D landmark annotations. A total of 6 teams from 4 countries participated in the challenge, and their results were assessed independently for each task. All the teams proposed deep learning-based methods for the 2D and 3D landmark segmentation tasks and differentiable rendering-based methods for the registration task. The proposed methods were evaluated on 16 test images and 2 preoperative 3D models from 2 patients. In Task 1, the teams were able to segment most of the 2D landmarks, while the 3D landmarks proved more challenging to segment. In Task 2, only one team obtained acceptable qualitative and quantitative registration results. Based on the experimental outcomes, we propose three key hypotheses that delineate the current limitations and future research directions in this domain.
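The landmark-guided 3D–2D registration described above can be illustrated with a short sketch: given matched 3D model landmarks and 2D image annotations, one optimises a rigid pose so that the projected 3D points land on their 2D counterparts. This is a minimal illustrative sketch, not any team's actual pipeline; the camera intrinsics, toy landmarks, and Rodrigues pose parameterisation are assumptions.

```python
# Minimal sketch (not any team's actual pipeline) of landmark-guided 3D-2D
# registration: optimise a rigid pose (axis-angle rotation + translation) so
# that projected 3D landmarks match their 2D annotations. Camera intrinsics,
# toy landmarks and the Rodrigues parameterisation are illustrative assumptions.
import torch

def rodrigues(rvec):
    # Differentiable axis-angle -> rotation matrix (Rodrigues' formula).
    theta = rvec.norm() + 1e-8
    k = rvec / theta
    K = torch.zeros(3, 3)
    K[0, 1], K[0, 2] = -k[2], k[1]
    K[1, 0], K[1, 2] = k[2], -k[0]
    K[2, 0], K[2, 1] = -k[1], k[0]
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def project(pts3d, rvec, tvec, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    # Rigid transform followed by pinhole projection.
    cam = pts3d @ rodrigues(rvec).T + tvec
    return torch.stack([fx * cam[:, 0] / cam[:, 2] + cx,
                        fy * cam[:, 1] / cam[:, 2] + cy], dim=1)

# Toy data: 3D ridge/ligament landmarks and their 2D annotations.
pts3d = torch.randn(20, 3) + torch.tensor([0.0, 0.0, 10.0])
target2d = project(pts3d, torch.tensor([0.05, -0.10, 0.02]),
                   torch.tensor([0.3, -0.2, 1.0]))

# Start near identity (a small non-zero rotation avoids the NaN gradient of norm at 0).
rvec = torch.tensor([0.01, 0.01, 0.01], requires_grad=True)
tvec = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([rvec, tvec], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((project(pts3d, rvec, tvec) - target2d) ** 2).mean()
    loss.backward()
    opt.step()
print(f"final reprojection MSE: {loss.item():.4f} px^2")
```

In the challenge itself the landmarks come from segmentation networks rather than ground truth, and the top methods replaced this point-to-point loss with differentiable rendering of the 3D model's contours.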
Citations: 0
Ensemble transformer-based multiple instance learning to predict pathological subtypes and tumor mutational burden from histopathological whole slide images of endometrial and colorectal cancer
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-21 | DOI: 10.1016/j.media.2024.103372
Ching-Wei Wang , Tzu-Chien Liu , Po-Jen Lai , Hikam Muzakky , Yu-Chi Wang , Mu-Hsien Yu , Chia-Hua Wu , Tai-Kuang Chao
In endometrial cancer (EC) and colorectal cancer (CRC), in addition to microsatellite instability, tumor mutational burden (TMB) has gradually gained attention as a genomic biomarker that can be used clinically to determine which patients may benefit from immune checkpoint inhibitors. High TMB is characterized by a large number of mutated genes, which encode aberrant tumor neoantigens, and implies a better response to immunotherapy. Hence, the subset of EC and CRC patients with high TMB may have a higher chance of benefiting from immunotherapy. TMB is mainly measured by whole-exome or next-generation sequencing, which is costly and difficult to apply widely across clinical cases. Therefore, an effective, efficient, low-cost and easily accessible tool is urgently needed to distinguish the TMB status of EC and CRC patients. In this study, we present a deep learning framework, namely Ensemble Transformer-based Multiple Instance Learning with Self-Supervised Learning Vision Transformer feature encoder (ETMIL-SSLViT), to predict pathological subtype and TMB status directly from H&E stained whole slide images (WSIs) of EC and CRC patients, which is helpful for both pathological classification and cancer treatment planning. Our framework was evaluated on two different cancer cohorts, including an EC cohort with 918 histopathology WSIs from 529 patients and a CRC cohort with 1495 WSIs from 594 patients from The Cancer Genome Atlas. The experimental results show that the proposed methods achieved excellent performance, outperforming seven state-of-the-art (SOTA) methods in cancer subtype classification and TMB prediction on both cancer datasets. Fisher's exact test further validated that the associations between the predictions of the proposed models and the actual cancer subtype or TMB status are both extremely strong (p<0.001). These promising findings show the potential of our proposed methods to guide personalized treatment decisions by accurately predicting the EC and CRC subtype and the TMB status for effective immunotherapy planning for EC and CRC patients.
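As an illustration of the multiple instance learning setup described above, the sketch below pools a bag of WSI patch embeddings into a single slide-level prediction via attention weights. It is a minimal sketch, not the ETMIL-SSLViT architecture itself; the feature dimension and the single attention head are assumptions.

```python
# Minimal sketch of attention-based multiple instance learning over WSI patch
# features, in the spirit of the transformer/MIL framework described above.
# The real ETMIL-SSLViT architecture, feature dimension and ensembling are
# not reproduced; a slide is treated as a bag of patch embeddings.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=384, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                         # bag: (num_patches, feat_dim)
        w = torch.softmax(self.attn(bag), dim=0)    # attention over patches
        slide_feat = (w * bag).sum(dim=0)           # weighted pooling
        return self.head(slide_feat), w

model = AttentionMIL()
patches = torch.randn(1000, 384)    # e.g. SSL-ViT features for one slide
logits, attn = model(patches)
print(logits.shape, attn.shape)     # torch.Size([2]) torch.Size([1000, 1])
```

The attention weights also give a form of interpretability, highlighting which patches drove the slide-level TMB or subtype call.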
Citations: 0
Harnessing 12-lead ECG and MRI data to personalise repolarisation profiles in cardiac digital twin models for enhanced virtual drug testing
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-18 | DOI: 10.1016/j.media.2024.103361
Julia Camps , Zhinuo Jenny Wang , Ruben Doste , Lucas Arantes Berg , Maxx Holmes , Brodie Lawson , Jakub Tomek , Kevin Burrage , Alfonso Bueno-Orovio , Blanca Rodriguez
Cardiac digital twins are computational tools capturing key functional and anatomical characteristics of patient hearts for investigating disease phenotypes and predicting responses to therapy. When paired with large-scale computational resources and large clinical datasets, digital twin technology can enable virtual clinical trials on virtual cohorts to fast-track therapy development. Here, we present an open-source automated pipeline for personalising ventricular electrophysiological function based on routinely acquired magnetic resonance imaging (MRI) data and the standard 12-lead electrocardiogram (ECG).
Using MRI-based anatomical models, a sequential Monte-Carlo approximate Bayesian computational inference method is extended to infer electrical activation and repolarisation characteristics from the ECG. Fast simulations are conducted with a reaction-Eikonal model, including the Purkinje network and biophysically-detailed subcellular ionic current dynamics for repolarisation. For each patient, parameter uncertainty is represented by inferring an envelope of plausible ventricular models rather than a single one, which means that parameter uncertainty can be propagated to therapy evaluation. Furthermore, we have developed techniques for translating from reaction-Eikonal to monodomain simulations, enabling more realistic simulations of cardiac electrophysiology. The pipeline is demonstrated in three healthy subjects, where our inferred pseudo-diffusion reaction-Eikonal models reproduced each patient's ECG with a median Pearson's correlation coefficient of 0.9, and then translated to monodomain simulations with a median correlation coefficient of 0.84 across all subjects. We then demonstrate the use of our digital twins for the virtual evaluation of dofetilide with uncertainty quantification. These evaluations using our cardiac digital twins reproduced dose-dependent prolongations of the QTc and T-peak-to-T-end intervals that are in keeping with large-population drug response data.
The methodologies for cardiac digital twinning presented here are a step towards personalised virtual therapy testing and can be scaled to generate virtual populations for clinical trials to fast-track therapy evaluation. The tools developed for this paper are open-source, documented, and made publicly available.
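To make the inference scheme concrete, the sketch below runs a stripped-down sequential Monte-Carlo ABC loop on a toy one-parameter simulator. The population size, perturbation kernel, tolerance schedule, and the omission of importance weights are simplifying assumptions; the paper's pipeline infers activation and repolarisation parameters against the measured 12-lead ECG.

```python
# Minimal sketch of sequential Monte-Carlo approximate Bayesian computation
# (SMC-ABC), the inference scheme described above, on a toy one-parameter
# "simulator". Population size, perturbation kernel and tolerance schedule
# are illustrative assumptions, and importance weights are omitted for
# brevity; the real pipeline fits reaction-Eikonal ECG simulations.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    # Toy stand-in for the reaction-Eikonal ECG simulator.
    t = np.linspace(0.0, 1.0, 100)
    return np.sin(2.0 * np.pi * theta * t)

observed = simulate(2.5) + rng.normal(0.0, 0.05, 100)

def distance(signal):
    # Root-mean-square mismatch to the observed trace.
    return np.sqrt(np.mean((signal - observed) ** 2))

n_particles = 200
particles = rng.uniform(0.5, 5.0, n_particles)      # draws from the prior
for eps in [1.0, 0.5, 0.2, 0.1]:                    # shrinking tolerances
    accepted = []
    while len(accepted) < n_particles:
        theta = rng.choice(particles) + rng.normal(0.0, 0.1)  # perturb a particle
        if 0.5 <= theta <= 5.0 and distance(simulate(theta)) < eps:
            accepted.append(theta)
    particles = np.array(accepted)
print(f"posterior mean ~ {particles.mean():.2f} (true value 2.5)")
```

The surviving particle population plays the role of the paper's "envelope of plausible ventricular models": downstream drug simulations are run for every particle, so parameter uncertainty propagates into the predicted QTc changes.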
Citations: 0
TopoTxR: A topology-guided deep convolutional network for breast parenchyma learning on DCE-MRIs
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-16 | DOI: 10.1016/j.media.2024.103373
Fan Wang , Zhilin Zou , Nicole Sakla , Luke Partyka , Nil Rawal , Gagandeep Singh , Wei Zhao , Haibin Ling , Chuan Huang , Prateek Prasanna , Chao Chen
Characterization of breast parenchyma in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a challenging task owing to the complexity of underlying tissue structures. Existing quantitative approaches, like radiomics and deep learning models, lack explicit quantification of intricate and subtle parenchymal structures, including fibroglandular tissue. To address this, we propose a novel topological approach that explicitly extracts multi-scale topological structures to better approximate breast parenchymal structures, and then incorporates these structures into a deep-learning-based prediction model via an attention mechanism. Our topology-informed deep learning model, TopoTxR, leverages topology to provide enhanced insights into tissues critical for disease pathophysiology and treatment response. We empirically validate TopoTxR using the VICTRE phantom breast dataset, showing that the topological structures extracted by our model effectively approximate the breast parenchymal structures. We further demonstrate TopoTxR's efficacy in predicting response to neoadjuvant chemotherapy. Our qualitative and quantitative analyses suggest differential topological behavior of breast tissue in treatment-naïve imaging between patients who respond favorably to therapy, achieving pathological complete response (pCR), and those who do not. In a comparative analysis with several baselines on the publicly available I-SPY 1 dataset (N = 161, including 47 patients with pCR and 114 without) and the Rutgers proprietary dataset (N = 120, with 69 patients achieving pCR and 51 not), TopoTxR demonstrates a notable improvement, achieving a 2.6% increase in accuracy and a 4.6% enhancement in AUC compared to the state-of-the-art method.
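A minimal sketch of the underlying idea, extracting persistent topological structures from a 3D volume, is given below using a cubical complex. The gudhi library, the random stand-in volume, and the "keep the most persistent 1-cycles" rule are assumptions here; the paper's own multi-scale extraction and attention mechanism are more involved.

```python
# Minimal sketch of extracting topological structures from a 3D volume with
# persistent homology via a cubical complex, the general idea behind the
# topology-guided model above. gudhi, the random stand-in volume and the
# persistence-based selection rule are illustrative assumptions.
import numpy as np
import gudhi

volume = np.random.rand(32, 32, 32)                  # stand-in for a DCE-MRI volume
cc = gudhi.CubicalComplex(top_dimensional_cells=volume)
diagram = cc.persistence()                           # list of (dim, (birth, death))

# 1-dimensional classes correspond to loop-like structures (e.g. vessel loops).
loops = [(b, d) for dim, (b, d) in diagram if dim == 1 and d != float("inf")]
loops.sort(key=lambda pair: pair[1] - pair[0], reverse=True)
print("five most persistent 1-cycles (birth, death):", loops[:5])
```

Persistence (death minus birth) separates stable tissue structures from noise, which is why such cycles can serve as explicit, attention-weighted inputs to a prediction network.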
Citations: 0
SpinDoctor-IVIM: A virtual imaging framework for intravoxel incoherent motion MRI
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-16 | DOI: 10.1016/j.media.2024.103369
Mojtaba Lashgari , Zheyi Yang , Miguel O. Bernabeu , Jing-Rebecca Li , Alejandro F. Frangi
Intravoxel incoherent motion (IVIM) imaging is increasingly recognised as an important tool in clinical MRI, where tissue perfusion and diffusion information can aid disease diagnosis, monitoring of patient recovery, and treatment outcome assessment. Currently, the discovery of biomarkers based on IVIM imaging, similar to other medical imaging modalities, is dependent on long preclinical and clinical validation pathways to link observable markers derived from images with the underlying pathophysiological mechanisms. To speed up this process, virtual IVIM imaging is proposed. This approach provides an efficient virtual imaging tool to design, evaluate, and optimise novel approaches for IVIM imaging. In this work, virtual IVIM imaging is developed through a new finite element solver, SpinDoctor-IVIM, which extends SpinDoctor, a diffusion MRI simulation toolbox. SpinDoctor-IVIM simulates IVIM imaging signals by solving the generalised Bloch–Torrey partial differential equation. The input velocity to SpinDoctor-IVIM is computed using HemeLB, an established Lattice Boltzmann blood flow simulator. Unlike previous approaches, SpinDoctor-IVIM accounts for volumetric microvasculature during blood flow simulations, incorporates diffusion phenomena in the intravascular space, and accounts for the permeability between the intravascular and extravascular spaces. The above-mentioned features of the proposed framework are illustrated with simulations on a realistic microvasculature model.
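For context, the sketch below fits the standard two-compartment IVIM signal model that such simulators are designed to study. The b-value set, noise level, and fit bounds are illustrative assumptions, and SpinDoctor-IVIM itself works at the level of the Bloch–Torrey PDE rather than this closed-form fit.

```python
# Minimal sketch of the standard two-compartment IVIM signal model:
# S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D), fitted to noisy toy data.
# b-values, noise level and bounds are illustrative assumptions; the
# SpinDoctor-IVIM solver works with the Bloch-Torrey PDE, not this fit.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    # f: perfusion fraction, d_star: pseudo-diffusion, d: tissue diffusion.
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

b_values = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800], dtype=float)  # s/mm^2
rng = np.random.default_rng(0)
signal = ivim(b_values, 0.10, 0.02, 0.001) * (1.0 + rng.normal(0.0, 0.01, b_values.size))

popt, _ = curve_fit(ivim, b_values, signal, p0=[0.05, 0.01, 0.0005],
                    bounds=([0.0, 0.003, 0.0], [0.5, 0.1, 0.003]))
print(dict(zip(["f", "D*", "D"], popt.round(5))))
```

A virtual imaging framework makes it possible to generate such signals from known ground-truth microvasculature, so the fitted f, D* and D can be checked against the simulated physiology.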
Citations: 0
MedLSAM: Localize and segment anything model for 3D CT images
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-15 | DOI: 10.1016/j.media.2024.103370
Wenhui Lei , Wei Xu , Kang Li , Xiaofan Zhang , Shaoting Zhang
Recent advancements in foundation models have shown significant potential in medical image analysis. However, there is still a gap in models specifically designed for medical image localization. To address this, we introduce MedLAM, a 3D medical foundation localization model that accurately identifies any anatomical part within the body using only a few template scans. MedLAM employs two self-supervision tasks: unified anatomical mapping (UAM) and multi-scale similarity (MSS) across a comprehensive dataset of 14,012 CT scans. Furthermore, we developed MedLSAM by integrating MedLAM with the Segment Anything Model (SAM). This innovative framework requires extreme point annotations across three directions on several templates to enable MedLAM to locate the target anatomical structure in the image, with SAM performing the segmentation. It significantly reduces the amount of manual annotation required by SAM in 3D medical imaging scenarios. We conducted extensive experiments on two 3D datasets covering 38 distinct organs. Our findings are twofold: (1) MedLAM can directly localize anatomical structures using just a few template scans, achieving performance comparable to fully supervised models; (2) MedLSAM closely matches the performance of SAM and its specialized medical adaptations with manual prompts, while minimizing the need for extensive point annotations across the entire dataset. Moreover, MedLAM has the potential to be seamlessly integrated with future 3D SAM models, paving the way for enhanced segmentation performance. Our code is public at https://github.com/openmedlab/MedLSAM.
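The localize-then-segment flow can be sketched as below: a localiser proposes a bounding box for the target structure, and SAM segments within it. The `locate_organ` helper and the checkpoint path are hypothetical; the `segment_anything` calls follow that package's public API, but MedLAM's actual few-shot anatomical localisation is not reproduced here.

```python
# Minimal sketch of the localise-then-segment idea: a localiser proposes a
# bounding box and SAM segments within it. `locate_organ` and the checkpoint
# path are hypothetical stand-ins; only the segment_anything calls follow
# that package's public API.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # assumed local weights
predictor = SamPredictor(sam)

def locate_organ(image_slice):
    # Hypothetical stand-in for MedLAM: returns an xyxy box around the organ.
    return np.array([100, 80, 220, 200])

ct_slice = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)  # toy slice
predictor.set_image(ct_slice)
masks, scores, _ = predictor.predict(box=locate_organ(ct_slice),
                                     multimask_output=False)
print(masks.shape, scores)   # one binary mask plus its predicted quality
```

The appeal of this decomposition is that only the localiser needs medical-domain training; the promptable segmenter can be swapped for any future 3D SAM variant.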
Citations: 0
HAGMN-UQ: Hyper association graph matching network with uncertainty quantification for coronary artery semantic labeling
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-11 | DOI: 10.1016/j.media.2024.103374
Chen Zhao , Michele Esposito , Zhihui Xu , Weihua Zhou
Coronary artery disease (CAD) is one of the leading causes of death worldwide. Accurate extraction of individual arterial branches from invasive coronary angiograms (ICA) is critical for CAD diagnosis and detection of stenosis. Generating semantic segmentation for coronary arteries through deep learning-based models presents challenges due to the morphological similarity among different types of coronary arteries, making it difficult to maintain high accuracy while keeping computational complexity low. To address this challenge, we propose an innovative approach using the hyper association graph-matching neural network with uncertainty quantification (HAGMN-UQ) for coronary artery semantic labeling on ICAs. The graph-matching procedure maps the arterial branches between two individual graphs, so that the unlabeled arterial segments are classified by the labeled segments, thereby achieving coronary artery semantic labeling. Leveraging hypergraphs not only extends representation capabilities beyond pairwise relationships, but also improves the robustness and accuracy of the graph matching by enabling the modeling of higher-order associations. In addition, employing uncertainty quantification to determine the trustworthiness of graph matching reduces the required number of comparisons, accelerating inference. Consequently, our model achieved an accuracy of 0.9211 for coronary artery semantic labeling with a fast inference speed, enabling effective and efficient prediction in real-time clinical decision-making scenarios.
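As a concrete stepping stone, the sketch below shows Sinkhorn normalisation, a standard way for graph-matching networks to turn a node-affinity matrix into a soft assignment between two graphs. The affinity values and iteration count are toy assumptions; HAGMN-UQ's hypergraph association and uncertainty estimate sit on top of this kind of step and are not reproduced.

```python
# Minimal sketch of Sinkhorn normalisation, a standard step by which
# graph-matching networks turn a node-affinity matrix into a soft assignment
# between two artery graphs. Affinities and iteration count are toy values.
import torch

def sinkhorn(affinity, n_iters=20, tau=0.1):
    # Alternate row/column normalisation in log space drives exp(affinity/tau)
    # towards a doubly-stochastic (soft permutation) matrix.
    log_alpha = affinity / tau
    for _ in range(n_iters):
        log_alpha = log_alpha - log_alpha.logsumexp(dim=1, keepdim=True)  # rows
        log_alpha = log_alpha - log_alpha.logsumexp(dim=0, keepdim=True)  # columns
    return log_alpha.exp()

affinity = torch.randn(6, 6)                 # pairwise segment similarities
match = sinkhorn(affinity)
print(match.sum(dim=0), match.sum(dim=1))    # marginals close to 1
print(match.argmax(dim=1))                   # label transfer from matched segments
```

The peakedness of each row of the resulting assignment is one natural signal an uncertainty-quantification head can exploit to decide how many template comparisons are actually needed.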
Citations: 0
Knowledge-driven multi-graph convolutional network for brain network analysis and potential biomarker discovery
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-09 | DOI: 10.1016/j.media.2024.103368
Xianhua Zeng, Jianhua Gong, Weisheng Li, Zhuoya Yang
In brain network analysis, individual-level data can provide biological features of individuals, while population-level data can provide demographic information about populations. However, existing methods mostly utilize either individual- or population-level features separately, inevitably neglecting the multi-level characteristics of brain disorders. To address this issue, we propose an end-to-end multi-graph neural network model called KMGCN. This model simultaneously leverages individual- and population-level features for brain network analysis. At the individual level, we construct a multi-graph using both knowledge-driven and data-driven approaches. Knowledge-driven refers to constructing a knowledge graph based on prior knowledge, while data-driven involves learning a data graph from the data itself. At the population level, we construct a multi-graph using both imaging and phenotypic data. Additionally, we devise a pooling method tailored to brain networks, capable of selecting brain regions implicated in brain disorders. We evaluate the performance of our model on two large datasets, ADNI and ABIDE, and experimental results demonstrate that it achieves state-of-the-art performance, with 86.87% classification accuracy for ADNI and 86.40% for ABIDE, accompanied by around 10% improvements in all evaluation metrics compared to state-of-the-art models. Additionally, the biomarkers identified by our model align well with recent neuroscience research, indicating the effectiveness of our model in brain network analysis and potential biomarker discovery. The code is available at https://github.com/GN-gjh/KMGCN.
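For readers unfamiliar with the building block, the sketch below implements one standard graph-convolution update, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), over a toy population graph of subjects. The adjacency, feature sizes, and single layer are assumptions; KMGCN's knowledge-/data-driven multi-graph construction is not reproduced.

```python
# Minimal sketch of one graph-convolution update over a toy population graph
# of subjects: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W). Sizes and the random
# adjacency are illustrative assumptions.
import torch

def gcn_layer(H, A, W):
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))      # symmetric normalisation
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

n_subjects, in_dim, out_dim = 8, 16, 4
H = torch.randn(n_subjects, in_dim)           # per-subject imaging features
A = (torch.rand(n_subjects, n_subjects) > 0.6).float()
A = ((A + A.T) > 0).float()                   # symmetrise the population graph
W = torch.randn(in_dim, out_dim)
print(gcn_layer(H, A, W).shape)               # torch.Size([8, 4])
```

In a multi-graph model, several such propagations run in parallel over differently constructed adjacencies (knowledge-driven, data-driven, phenotypic) before their outputs are fused.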
Citations: 0
RFMiD: Retinal Image Analysis for multi-Disease Detection challenge
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-09 | DOI: 10.1016/j.media.2024.103365
Samiksha Pachade , Prasanna Porwal , Manesh Kokare , Girish Deshmukh , Vivek Sahasrabuddhe , Zhengbo Luo , Feng Han , Zitang Sun , Li Qihan , Sei-ichiro Kamata , Edward Ho , Edward Wang , Asaanth Sivajohan , Saerom Youn , Kevin Lane , Jin Chun , Xinliang Wang , Yunchao Gu , Sixu Lu , Young-tack Oh , Fabrice Mériaudeau
In recent decades, many large fundus image datasets have been made publicly available for diabetic retinopathy, glaucoma, age-related macular degeneration, and a few other frequent pathologies. These datasets were used to develop computer-aided disease diagnosis systems by training deep learning models to detect these frequent pathologies. One challenge limiting the adoption of such systems by ophthalmologists is that they ignore sight-threatening rare pathologies, such as central retinal artery occlusion or anterior ischemic optic neuropathy, which ophthalmologists currently detect. Aiming to advance the state of the art in automatic ocular disease classification of frequent diseases along with rare pathologies, a grand challenge on "Retinal Image Analysis for multi-Disease Detection" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2021). This paper reports the challenge organization, dataset, top-performing participants' solutions, evaluation measures, and results based on a new "Retinal Fundus Multi-disease Image Dataset" (RFMiD). There were two principal sub-challenges: disease screening (i.e., presence versus absence of pathology, a binary classification problem) and disease/pathology classification (a 28-class multi-label classification problem). The challenge received a positive response from the scientific community, with 74 submissions from individuals/teams. The top-performing methodologies utilized a blend of data preprocessing, data augmentation, pre-trained models, and model ensembling. This multi-disease (frequent and rare pathologies) detection will enable the development of generalizable models for screening the retina, unlike previous efforts that focused on the detection of specific diseases.
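The 28-class multi-label formulation of the classification sub-challenge can be sketched with a sigmoid head trained under binary cross-entropy, as below. The ResNet-50 backbone, the 0.5 decision threshold, and the toy batch are assumptions; the top entries ensembled several pretrained models rather than this single network.

```python
# Minimal sketch of a 28-class multi-label classifier for fundus images:
# a backbone with a 28-logit sigmoid head trained with binary cross-entropy.
# Backbone choice, threshold and the toy batch are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=None)              # load pretrained weights in practice
backbone.fc = nn.Linear(backbone.fc.in_features, 28)  # one logit per disease

criterion = nn.BCEWithLogitsLoss()
images = torch.randn(4, 3, 224, 224)                  # toy fundus batch
labels = (torch.rand(4, 28) > 0.9).float()            # multi-hot disease targets
loss = criterion(backbone(images), labels)
loss.backward()

with torch.no_grad():
    preds = (torch.sigmoid(backbone(images)) > 0.5).int()  # per-disease decisions
print(loss.item(), preds.shape)                            # ..., torch.Size([4, 28])
```

Unlike softmax classification, each sigmoid output is thresholded independently, so an image can legitimately carry several disease labels at once, which is exactly the multi-disease setting of the challenge.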
Citations: 0
Dual structure-aware image filterings for semi-supervised medical image segmentation
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-10-09 | DOI: 10.1016/j.media.2024.103364
Yuliang Gu , Zhichao Sun , Tian Chen , Xin Xiao , Yepeng Liu , Yongchao Xu , Laurent Najman
Semi-supervised image segmentation has attracted great attention recently. The key is how to leverage unlabeled images in the training process. Most methods maintain consistent predictions for the unlabeled images under variations (e.g., adding noise/perturbations, or creating alternative versions) at the image and/or model level. However, most image-level variations ignore the prior structural information that medical images often carry, which has not been well explored. In this paper, we propose novel dual structure-aware image filterings (DSAIF) as the image-level variations for semi-supervised medical image segmentation. Motivated by connected filtering, which simplifies an image by filtering a structure-aware tree-based image representation, we resort to the dual contrast-invariant Max-tree and Min-tree representations. Specifically, we propose a novel connected filtering that removes topologically equivalent nodes (i.e., connected components) having no siblings in the Max/Min-tree. This results in two filtered images that preserve topologically critical structure. Applying the proposed DSAIF to mutually supervised networks decreases the consensus of their erroneous predictions on unlabeled images. This helps alleviate the confirmation bias issue of overfitting to noisy pseudo labels of unlabeled images, and thus effectively improves segmentation performance. Extensive experimental results on three benchmark datasets demonstrate that the proposed method significantly and consistently outperforms state-of-the-art methods. The source codes will be publicly available.
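A minimal sketch of producing two dual, structure-aware filtered views with connected (tree-based) filters is given below, using scikit-image's area opening/closing, which operate on the Max-/Min-tree. The area criterion is a simpler stand-in for the paper's removal of sibling-free nodes, which is not reproduced.

```python
# Minimal sketch of two dual structure-aware filtered views via connected
# filters on the Max-/Min-tree, here scikit-image's area opening/closing.
# The area criterion is a simpler stand-in for the paper's sibling-free
# node removal; the test image stands in for a medical slice.
import numpy as np
from skimage import data
from skimage.morphology import area_opening, area_closing

image = data.camera()                                # stand-in for a medical slice
view_max = area_opening(image, area_threshold=64)    # simplifies bright components (Max-tree)
view_min = area_closing(image, area_threshold=64)    # simplifies dark components (Min-tree)

# In the semi-supervised setup, mutually supervised networks would be trained
# to predict consistently on `view_max` and `view_min` for unlabeled images.
print(image.shape, view_max.dtype, view_min.dtype)
```

Because connected filters never create new contours, the two views keep the anatomically critical boundaries intact while differing enough to break the consensus of erroneous pseudo labels.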
Citations: 0