Diagnostic Accuracy of Vision-Language Models on Japanese Diagnostic Radiology, Nuclear Medicine, and Interventional Radiology Specialty Board Examinations
Pub Date : 2024-05-31 DOI: 10.1101/2024.05.31.24308072
Tatsushi Oura, Hiroyuki Tatekawa, Daisuke Horiuchi, Shu Matsushita, Hirotaka Takita, Natsuko Atsukawa, Yasuhito Mitsuyama, Atsushi Yoshida, Kazuki Murai, Rikako Tanaka, Taro Shimono, Akira Yamamoto, Yukio Miki, Daiju Ueda
Purpose The performance of vision-language models (VLMs) with image interpretation capabilities, such as GPT-4 omni (GPT-4o), GPT-4 vision (GPT-4V), and Claude-3, remains unexplored and has not been directly compared in specialized radiological fields, including nuclear medicine and interventional radiology. This study aimed to evaluate and compare the diagnostic accuracy of several VLMs (GPT-4 + GPT-4V, GPT-4o, Claude-3 Sonnet, and Claude-3 Opus) on the Japanese diagnostic radiology (JDR), nuclear medicine (JNM), and interventional radiology (JIR) board certification tests.
Pub Date : 2024-05-30 DOI: 10.1101/2024.05.28.24308027
Alfredo Lucas, Chetan Vadali, Sofia Mouchtaris, T. Campbell Arnold, James J Gugger, Catherine V. Kulick-Soper, Mariam Josyula, Nina Petillo, Sandhitsu Das, Jacob Dubroff, John A. Detre, Joel M. Stein, Kathryn A. Davis
Background and Significance: Positron Emission Tomography (PET) using fluorodeoxyglucose (FDG-PET) is a standard imaging modality for detecting areas of hypometabolism associated with the seizure onset zone (SOZ) in temporal lobe epilepsy (TLE). However, FDG-PET is costly and involves the use of a radioactive tracer. Arterial Spin Labeling (ASL) offers an MRI-based quantification of cerebral blood flow (CBF) that could also help localize the SOZ, but its performance in doing so, relative to FDG-PET, is limited. In this study, we seek to improve ASL's diagnostic performance by developing a deep learning framework for synthesizing FDG-PET-like images from ASL and structural MRI inputs. Methods: We included 68 epilepsy patients, of whom 36 had well-lateralized TLE. We compared the coupling between FDG-PET and ASL CBF values in different brain regions, as well as the asymmetry of these values across the brain. We additionally assessed each modality's ability to lateralize the SOZ across brain regions. Using our paired PET-ASL data, we developed FlowGAN, a generative adversarial network (GAN) that synthesizes PET-like images from ASL and T1-weighted MRI inputs. We tested our synthetic PET images against the actual PET images of subjects to assess their ability to reproduce clinically meaningful hypometabolism and asymmetries in TLE. Results: We found variable coupling between PET and ASL CBF values across brain regions. PET and ASL had high coupling in neocortical temporal and frontal brain regions (Spearman's r > 0.30, p < 0.05) but low coupling in mesial temporal structures (Spearman's r < 0.30, p > 0.05). Both whole-brain PET and ASL CBF asymmetry values provided good separability between left and right TLE subjects, but PET (AUC = 0.96, 95% CI: [0.88, 1.00]) outperformed ASL (AUC = 0.81, 95% CI: [0.65, 0.96]). FlowGAN-generated images demonstrated high structural similarity to actual PET images (SSIM = 0.85). Globally, asymmetry values were better correlated between synthetic PET and original PET than between ASL CBF and original PET, with a mean correlation increase of 0.15 (95% CI: [0.07, 0.24], p < 0.001, Cohen's d = 0.91). Furthermore, regions with poor ASL-PET correlation (e.g., mesial temporal structures) showed the greatest improvement with synthetic PET images. Conclusions: FlowGAN improves ASL's diagnostic performance, generating synthetic PET images that closely mimic actual FDG-PET in depicting hypometabolism associated with TLE. This approach could improve non-invasive SOZ localization, offering a promising tool for epilepsy presurgical assessment. It potentially broadens the applicability of ASL in clinical practice and could reduce reliance on FDG-PET for epilepsy and other neurological disorders.
Title: "Enhancing the Diagnostic Utility of ASL Imaging in Temporal Lobe Epilepsy through FlowGAN: An ASL to PET Image Translation Framework"
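The left-right asymmetry comparison at the heart of the lateralization analysis above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the asymmetry index uses the standard 2·(L − R)/(L + R) form, and the regional values are toy numbers, not study data.

```python
def asymmetry_index(left, right):
    """Standard left-right asymmetry index: 2*(L - R) / (L + R).
    Negative values indicate lower signal on the left."""
    return 2.0 * (left - right) / (left + right)

# Hypothetical regional signal values (left, right) for one subject;
# these numbers are invented for illustration only.
regions = {
    "hippocampus": (0.8, 1.0),
    "amygdala": (0.85, 1.0),
    "temporal_neocortex": (0.9, 1.0),
}

ai = {name: asymmetry_index(l, r) for name, (l, r) in regions.items()}
# A whole-brain summary value could then be, e.g., the mean regional asymmetry,
# which a classifier or ROC analysis would use to separate left from right TLE.
mean_ai = sum(ai.values()) / len(ai)
```

A consistently negative mean asymmetry would point toward a left-sided focus in this toy setup; the study compares how well such summaries, computed from ASL CBF, synthetic PET, and real PET, separate left from right TLE.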
Accuracy of Combined Deep Learning Algorithms in Detecting Spontaneous Intracranial Hemorrhage on Emergent Head CT Scans
Pub Date : 2024-05-29 DOI: 10.1101/2024.05.28.24308084
Takala Juuso, Peura Heikki, Riku Pirinen, Väätäinen Katri, Sergei Terjajev, Ziyuan Lin, Rahul Raj, Korja Miikka
Background Spontaneous intracranial hemorrhages are life-threatening conditions that require fast and accurate diagnosis. We hypothesized that deep learning (DL) could be utilized to detect these hemorrhages with high accuracy.
Pub Date : 2024-05-27 DOI: 10.1101/2024.05.24.24307873
Dan Liu, Yiqi Mi, Menghan Li, Anna Nigri, Marina Grisoli, Keith M Kendrick, Benjamin Becker, Stefania Ferraro
Objective Despite the promising results of neurofeedback with real-time functional magnetic resonance imaging (rt-fMRI-NF) in the treatment of various psychiatric and neurological disorders, few studies have investigated its effects in acute and chronic pain, and those have yielded mixed results. The lack of clear neuromodulation targets, rooted in the still poorly understood neurophysiopathology of chronic pain, has probably contributed to these inconsistent findings. In contrast, functional neurosurgery (funcSurg) approaches targeting specific brain regions have been shown to reduce pain in a considerable number of patients with chronic pain; however, their invasiveness limits their use to patients in critical situations. In this work, we sought to redefine, in an unbiased manner, future rt-fMRI-NF targets informed by the long tradition of funcSurg approaches.
Title: "A surgery-informed precision approach to determining brain targets for real-time fMRI neurofeedback modulation in chronic pain"
Pub Date : 2024-05-27 DOI: 10.1101/2024.05.26.24307915
Yuki Sonoda, Ryo Kurokawa, Yuta Nakamura, Jun Kanzawa, Mariko Kurokawa, Yuji Ohizumi, Wataru Gonoi, Osamu Abe
Background Large language models (LLMs) are advancing rapidly and demonstrate high performance in understanding textual information, suggesting potential applications in interpreting patient histories and documented imaging findings; further improvements in their diagnostic ability are expected. However, comprehensive comparisons between LLMs from different developers have been lacking.
Title: "Diagnostic Performances of GPT-4o, Claude 3 Opus, and Gemini 1.5 Pro in “Diagnosis Please” Cases"
Generating intermediate slices with U-nets in craniofacial CT images
Pub Date : 2024-05-09 DOI: 10.1101/2024.05.08.24307089
Soh Nishimoto, Kenichiro Kawai, Koyo Nakajima, Hisako Ishise, Masao Kakibuchi
Aim Computed Tomography (CT) imaging equipment varies across facilities, leading to inconsistent image conditions, which poses challenges for deep learning analysis of CT images collected from multiple sites. Standardizing the shape of the image matrix requires creating intermediate slice images at a uniform slice interval. This study aimed to generate intermediate slices from two existing CT slices.
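As a point of reference for what a learned slice-generation model must beat, the naive way to create an intermediate slice from two neighbours is voxel-wise linear interpolation. This sketch is illustrative only and is not the study's U-net; the tiny 2×2 "slices" are toy data.

```python
def interpolate_slice(slice_a, slice_b, t=0.5):
    """Naive baseline for an intermediate slice: voxel-wise linear
    interpolation between two adjacent slices (t=0.5 gives the midpoint).
    A U-net, as in the study, instead learns to predict the in-between
    slice from the pair, which can recover structure interpolation blurs."""
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(slice_a, slice_b)
    ]

# Toy 2x2 slices with opposite intensity gradients (illustrative values).
a = [[0, 0], [100, 100]]
b = [[100, 100], [0, 0]]
mid = interpolate_slice(a, b)  # every voxel averages to 50.0
```

Linear interpolation blurs any structure that shifts between slices, which is exactly the failure mode a learned model can address.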
Pub Date : 2024-05-09 DOI: 10.1101/2024.05.07.24304092
Alexey Shevtsov, Iaroslav Tominin, Vladislav Tominin, Vsevolod Malevanniy, Yury Esakov, Zurab Tukvadze, Andrey Nefedov, Piotr Yablonskii, Pavel Gavrilov, Vadim Kozlov, Mariya Blokhina, Elena Nalivkina, Victor Gombolevskiy, Yuriy Vasilev, Mariya Dugova, Valeria Chernina, Olga Omelyanskaya, Roman Reshetnikov, Ivan Blokhin, Mikhail Belyaev
Lung cancer is the second most common type of cancer worldwide, accounting for about 20% of all cancer deaths, with a 5-year survival rate below 10% at the latest stage. Current guidelines for the most common type, non-small-cell lung cancer (NSCLC), recommend staging based on the 8th edition of the TNM classification, in which mediastinal lymph node involvement plays a key role. However, most non-invasive methods have very limited sensitivity, while the more accurate invasive methods can be contraindicated for some patients. Recent advances in deep learning show great potential for solving such problems, but most of this work focuses on the algorithmic side rather than clinical relevance. Moreover, none of it has addressed classifying the malignancy of individual lymph nodes, restricting analysis to the study as a whole and limiting the interpretability of the result, without giving clinicians an option to validate it. This work mitigates these gaps by proposing a multi-step algorithm that segments each visible mediastinal lymph node and assesses the probability of its involvement in the metastatic process, using the results of histological verification for training. The developed pipeline achieves 0.74 ± 0.01 average recall with 0.53 ± 0.26 object Dice score on the clinically relevant lymph node segmentation task and 0.73 ROC AUC for patient N-stage prediction, outperforming traditional size-based criteria.
Title: "Automatic Lymph Nodes Segmentation and Histological Status Classification on Computed Tomography Scans Using Convolutional Neural Network"
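The segmentation metrics quoted above (object Dice score and recall) can be computed from voxel sets as follows. This is a generic sketch, not the authors' evaluation code, and the two tiny masks are toy examples.

```python
def dice(pred, truth):
    """Dice similarity between two voxel sets, e.g. a predicted and a
    reference lymph-node mask: 2*|A ∩ B| / (|A| + |B|)."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def recall(pred, truth):
    """Fraction of reference voxels recovered by the prediction."""
    return len(pred & truth) / len(truth)

# Toy 2-D voxel masks (coordinates are illustrative only).
pred = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 1), (1, 1), (1, 0)}
# Overlap is 2 voxels: dice = 2*2/(3+3) = 2/3, recall = 2/3.
```

In object-wise evaluation, such scores are computed per lymph node and then averaged, so a few badly missed nodes can drag the mean down even when large nodes segment well, which is consistent with the wide ±0.26 spread reported.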
Pub Date : 2024-05-08 DOI: 10.1101/2024.05.06.24306829
Ines Horvat-Menih, Alixander S Khan, Mary A McLean, Joao Duarte, Eva Serrao, Stephan Ursprung, Joshua D Kaggie, Andrew B Gill, Andrew N Priest, Mireia Crispin-Ortuzar, Anne Y Warren, Sarah J Welsh, Thomas J Mitchell, Grant D Stewart, Ferdia A Gallagher
Purpose Conventional renal mass biopsy approaches are inaccurate, potentially leading to undergrading. This study explored using hyperpolarised [1-13C]pyruvate MRI (HP 13C-MRI) to identify the most aggressive areas within the tumour of patients with clear cell renal cell carcinoma (ccRCC).
Title: "K-means clustering of hyperpolarised 13C-MRI identifies intratumoural perfusion/metabolism mismatch in renal cell carcinoma as best predictor of highest grade"
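The clustering step named in the title, k-means over per-voxel feature pairs such as (perfusion, metabolism), can be sketched as below. This is a generic minimal k-means, not the authors' pipeline, and the feature values are invented; a "mismatch" cluster would be high on one axis and low on the other.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means for small 2-D feature vectors, e.g. per-voxel
    (perfusion, metabolism) pairs. Returns final centers and clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize from k distinct points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each voxel to its nearest center (squared distance)
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # recompute each center as its cluster mean (keep old if empty)
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Toy voxels: two matched-looking and two mismatch-looking feature pairs.
points = [(1.0, 0.1), (0.9, 0.2), (0.2, 0.9), (0.1, 1.0)]
centers, clusters = kmeans(points, 2)
```

Each resulting cluster is an intratumoural habitat; the study's finding is that the habitat with high perfusion relative to metabolism best predicts the highest tumour grade.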
Pub Date : 2024-05-08 DOI: 10.1101/2024.05.06.24306817
Ines Horvat-Menih, Ruth Casey, James Denholm, Gregory Hamm, Heather Hulme, John Gallon, Alixander S Khan, Joshua Kaggie, Andrew B Gill, Andrew N Priest, Joao A G Duarte, Cissy Yong, Cara Brodie, James Whitworth, Simon T Barry, Richard J A Goodwin, Shubha Anand, Marc Dodd, Katherine Honan, Sarah J Welsh, Anne Y Warren, Tevita Aho, Grant D Stewart, Thomas J Mitchell, Mary A McLean, Ferdia A Gallagher
Background Fumarate hydratase-deficient renal cell carcinoma (FHd-RCC) is a rare and aggressive renal cancer subtype characterised by increased fumarate accumulation and upregulated lactate production. Renal tumours demonstrate significant intratumoral metabolic heterogeneity, which may contribute to treatment failure. Emerging non-invasive metabolic imaging techniques have clinical potential to more accurately phenotype tumour metabolism and its heterogeneity.
Title: "Probing intratumoral metabolic compartmentalisation in fumarate hydratase-deficient renal cancer using clinical hyperpolarised 13C-MRI and mass spectrometry imaging"
Pub Date : 2024-05-07 DOI: 10.1101/2024.05.06.24306965
Anthony A. Gatti, Louis Blankemeier, Dave Van Veen, Brian Hargreaves, Scott L. Delp, Garry E. Gold, Feliks Kogan, Akshay S. Chaudhari
Analyzing anatomic shapes of tissues and organs is pivotal for accurate disease diagnostics and clinical decision-making. One prominent disease that depends on anatomic shape analysis is osteoarthritis, which affects 30 million Americans. To advance osteoarthritis diagnostics and prognostics, we introduce ShapeMed-Knee, a 3D shape dataset with 9,376 high-resolution, medical-imaging-based 3D shapes of both femur bone and cartilage. Beyond the data, ShapeMed-Knee includes two benchmarks for assessing reconstruction accuracy and five clinical prediction tasks that assess the utility of learned shape representations. Leveraging ShapeMed-Knee, we develop and evaluate a novel hybrid explicit-implicit neural shape model that achieves up to 40% better reconstruction accuracy than a statistical shape model and an implicit neural shape model. Our hybrid models achieve state-of-the-art performance in preserving cartilage biomarkers; they are also the first models to successfully predict localized structural features of osteoarthritis, outperforming shape models and convolutional neural networks applied to raw magnetic resonance images and segmentations. The ShapeMed-Knee dataset provides medically grounded evaluations of reconstructing multiple anatomic surfaces and embedding meaningful disease-specific information. ShapeMed-Knee reduces barriers to applying 3D modeling in medicine, and our benchmarks highlight that advances in 3D modeling can enhance diagnosis and risk stratification for complex diseases. The dataset, code, and benchmarks will be made freely accessible.
Title: "ShapeMed-Knee: A Dataset and Neural Shape Model Benchmark for Modeling 3D Femurs"