Pub Date: 2025-03-01 | Epub Date: 2025-01-27 | DOI: 10.1016/j.metrad.2025.100133
Huiting Wu , Meizhi Yi , Hong Zhou
The lenticulostriate arteries (LSAs) are crucial cerebral microvessels supplying blood to the basal ganglia and internal capsule. Diagnosing, recognizing, and treating cognitive impairment in cerebral small vessel disease (CSVD) is challenging because of its complex and incompletely understood pathogenesis. This review surveys the LSA-related CSVD literature, analyzes mechanisms of microvascular injury, and explores pathophysiological events in patients with type 2 diabetes mellitus (T2DM), including blood flow disorders, neurovascular unit dysfunction, and blood-brain barrier disruption, and their relationship to cognitive impairment. We examine LSA structure and vascular changes to identify biomarkers of cognitive impairment, explain CSVD's role in cognitive symptoms and brain network disconnection, and assess the impact of vascular risk factors on the LSAs and cognitive decline, considering the LSAs as potential therapeutic targets.
Title: A novel neuro-biomarker of cognitive impairment related to cerebral small vessel disease in patients with T2DM: Lenticulostriate arteries. Meta-Radiology 3(1), Article 100133.
Pub Date: 2025-03-01 | Epub Date: 2025-01-02 | DOI: 10.1016/j.metrad.2024.100124
Saher Verma , Leander Maerkisch , Alberto Paderno , Leonard Gilberg , Bianca Teodorescu , Mathias Meyer
In an era where early detection of disease is paramount, integrating artificial intelligence (AI) into routine lung cancer screening offers a groundbreaking approach to uncovering multiple health conditions from a single scan. Lung cancer remains the most common cause of cancer-related death globally, underscoring the importance of early detection for improving survival. Traditional low-dose computed tomography (LDCT) focuses primarily on identifying lung malignancies, often missing the opportunity to detect other clinically relevant biomarkers. This review explores the expanding role of AI in radiology, where AI-driven algorithms can simultaneously detect multiple biomarkers and composite health measures, facilitating the opportunistic identification of conditions beyond lung cancer, including musculoskeletal disorders, cardiovascular diseases, pulmonary conditions, hepatic steatosis, and malignancies of the adrenal and thyroid glands and breast tissue. Through an extensive review of the current literature sourced from PubMed, we highlight advancements in AI-driven biomarker detection, evaluate the potential benefits of a broader diagnostic approach, and address challenges related to model standardization and clinical integration. AI-enhanced LDCT screening shows significant promise in augmenting routine screenings, potentially advancing early detection, comprehensive patient assessment, and overall disease management across multiple health conditions.
Title: One scan, multiple insights: A review of AI-driven biomarker imaging and composite measure detection in lung cancer screening. Meta-Radiology 3(1), Article 100124.
Pub Date: 2024-12-01 | Epub Date: 2024-10-16 | DOI: 10.1016/j.metrad.2024.100112
Ruowei Tang, Pengfei Zhao, Jia Li, Zhixiang Wang, Ning Xu, Zhenchang Wang
The human ear, with complex structures such as the ossicular chain, cochlea, and auditory nerve, plays a crucial role in hearing and balance. Common ear diseases, such as hearing loss, tinnitus, facial paralysis, and vertigo, affect the quality of life of millions in China. Computed tomography (CT) has advanced significantly since its introduction to China in 2000: resolution has improved from millimeter to sub-millimeter levels, and further to 10 μm with bone-dedicated CT technology. These advances have made CT the preferred method for diagnosing various ear conditions, including congenital malformations, trauma, inflammation, and neoplasms. Artificial intelligence (AI) has brought significant breakthroughs to CT diagnosis. Automatic segmentation of ear structures has improved dramatically with the advent of ultra-high-resolution computed tomography (U-HRCT). AI-driven measurement tools are enhancing the precision and personalization of surgical planning, while deep learning-based anomaly detection addresses the challenge of detecting diverse ear lesions. Furthermore, AI-driven natural language processing and large language models are revolutionizing the generation of radiology reports, providing accurate and standardized diagnostic information. Despite ongoing challenges, the application of AI in CT is expected to facilitate the otological field, leading to more precise and personalized treatment of ear diseases.
Title: Artificial intelligence in CT diagnosis: Current status and future prospects for ear diseases. Meta-Radiology 2(4), Article 100112.
Pub Date: 2024-12-01 | Epub Date: 2024-07-15 | DOI: 10.1016/j.metrad.2024.100099
Yunyi Liu , Yingshu Li , Zhanyu Wang , Xinyu Liang , Lingqiao Liu , Lei Wang , Leyang Cui , Zhaopeng Tu , Longyue Wang , Luping Zhou
This work evaluates GPT-4V's multimodal capability for medical image analysis, focusing on three representative tasks: radiology report generation, medical visual question answering, and medical visual grounding. For the evaluation, a set of prompts is designed for each task to elicit the corresponding capability of GPT-4V and produce sufficiently good outputs. Three evaluation approaches, quantitative analysis, human evaluation, and case study, are employed to achieve an in-depth and extensive evaluation. Our evaluation shows that GPT-4V excels at understanding medical images: it can generate high-quality radiology reports and effectively answer questions about them. Meanwhile, its performance on medical visual grounding needs substantial improvement. In addition, we observe a discrepancy between the outcomes of the quantitative analysis and the human evaluation. This discrepancy suggests the limitations of conventional metrics in assessing the performance of large language models like GPT-4V and the necessity of developing new metrics for automatic quantitative analysis.
Title: A systematic evaluation of GPT-4V's multimodal capability for chest X-ray image analysis. Meta-Radiology 2(4), Article 100099.
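The gap between automatic metrics and human judgment can be made concrete with a minimal surface-overlap score of the kind conventional report-generation metrics are built on. The function below is an illustrative sketch (token-level F1, not a metric from the paper): two reports that differ in a clinically decisive word can still score highly.

```python
from collections import Counter

def token_f1(generated: str, reference: str) -> float:
    """Token-level F1 between a generated and a reference report.

    A crude stand-in for surface-overlap metrics (BLEU/ROUGE-style):
    it rewards shared words regardless of their clinical importance.
    """
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((gen & ref).values())  # multiset intersection of tokens
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# "no focal consolidation" vs "focal consolidation" differ only by the
# negation, yet overlap heavily -- one way such metrics diverge from
# human evaluation.
print(round(token_f1("no focal consolidation", "focal consolidation"), 3))
```

This is exactly the failure mode the discrepancy points at: a high surface-overlap score for a report whose clinical meaning is inverted.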
Pub Date: 2024-12-01 | Epub Date: 2024-09-18 | DOI: 10.1016/j.metrad.2024.100102
Zhifeng Wang, Renjiao Yi, Xin Wen, Chenyang Zhu, Kai Xu
With the rapid development of 3D vision and computer graphics, the way humans interact with the world has undergone significant transformations, and 3D vision technologies have profoundly impacted the imaging-based analysis of cardiovascular disease (CVD). In this paper, we provide a comprehensive review of CVD analysis based on 3D vision. First, we delineate cardiovascular imaging and cardiovascular data types from both medical and computational perspectives. Then, we introduce a systematic taxonomy to review current practices of 3D vision in cardiovascular applications, covering 3D vascular segmentation, 3D vascular map generation, 3D vascular reconstruction, and 3D vascular super-resolution. Additionally, we compile a list of publicly accessible cardiac image datasets and code repositories to support the reproduction of related algorithms and foster data and algorithm sharing within the community. Finally, we discuss the inherent challenges, limitations, and potential of 3D-vision-based cardiovascular imaging methods, and propose directions for overcoming these obstacles in future research.
Title: Cardiovascular medical image and analysis based on 3D vision: A comprehensive survey. Meta-Radiology 2(4), Article 100102.
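For the 3D vascular segmentation task surveyed above, the standard quantitative yardstick is the Dice similarity coefficient between predicted and reference voxel masks. The following is a minimal NumPy sketch (the toy masks are hypothetical, for illustration only):

```python
import numpy as np

def dice_3d(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary 3D masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom else 1.0

# Two overlapping toy "vessel" masks in a 2x2x2 volume
pred = np.array([1, 1, 1, 1, 0, 0, 0, 0]).reshape(2, 2, 2)
truth = np.array([1, 1, 0, 0, 1, 1, 0, 0]).reshape(2, 2, 2)
print(dice_3d(pred, truth))  # 2 shared voxels out of 4 + 4 -> 0.5
```

Because the score normalizes by total foreground volume, it remains meaningful for thin, sparse structures like vessels, where plain voxel accuracy would be dominated by background.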
Pub Date: 2024-12-01 | Epub Date: 2024-10-01 | DOI: 10.1016/j.metrad.2024.100111
Boyang Mao , Hong Wang , Hongxi Zhang , Xueliang Shang , Zhi Yang
Background
The corpus callosum plays a crucial role in integrated brain function, and its development in childhood is strongly associated with subsequent cognitive, emotional, and behavioral development. However, the developmental trends of the corpus callosum in preschool children remain poorly understood. This study comprehensively investigates age and sex differences in corpus callosum thickness in typically developing children between 1 and 6 years old.
Methods
T1-weighted structural MRI data were collected from a sample of 295 neurologically normal children aged 1–6 years. Thickness measurements of the mid-sagittal plane of the corpus callosum were obtained with the specialized corpus callosum segmentation software Yuki.
Results
The anterior part grew faster than the middle and posterior sections, while growth at the extremities was not statistically significant. Sex differences were also identified: males showed earlier development of the corpus callosum, particularly between ages 1 and 3, whereas females exhibited the most notable increase in thickness between ages 3 and 5.
Conclusion
This study provides significant insights into the developmental trends of the mid-sagittal plane of the corpus callosum in preschool children. It reveals distinct non-linear developmental patterns in different sections of the corpus callosum and highlights the influence of sex on these developmental patterns.
Title: Developmental trends in corpus callosum thickness among preschool children. Meta-Radiology 2(4), Article 100111.
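A non-linear growth trend of the kind reported here can be modeled by fitting a low-order polynomial of thickness against age. The sketch below uses hypothetical measurements for one callosal segment, not the study's data; it only illustrates how a decelerating growth pattern shows up in the fit.

```python
import numpy as np

# Hypothetical mean thickness (mm) of one callosal segment at ages 1-6
ages = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
thickness = np.array([4.0, 4.6, 5.0, 5.3, 5.5, 5.6])

# A quadratic captures decelerating growth: fast early, slowing later
coeffs = np.polyfit(ages, thickness, deg=2)
fitted = np.polyval(coeffs, ages)

# A negative leading coefficient means the growth rate declines with age
assert coeffs[0] < 0
```

Fitting each callosal section separately in this way would let the anterior, middle, and posterior growth rates be compared directly, as the study does descriptively.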
Pub Date: 2024-12-01 | Epub Date: 2024-10-16 | DOI: 10.1016/j.metrad.2024.100113
Xinrui Song , Jiajin Zhang , Pingkun Yan, Juergen Hahn, Uwe Kruger, Hisham Mohamed, Ge Wang
The integration of artificial intelligence (AI) chatbots into higher education marks a shift towards a new generation of pedagogical tools, mirroring the arrival of milestones like the internet. With the launch of ChatGPT-4 Turbo in November 2023, we developed a ChatGPT-based teaching application (https://chat.openai.com/g/g-1imx1py4K-chatge-medical-imaging) and integrated it into our undergraduate medical imaging course in the Spring 2024 semester. This study investigates the use of ChatGPT throughout a semester-long trial, providing insights into students' engagement, perception, and the overall educational effectiveness of the technology. We systematically collected and analyzed data concerning students’ interaction with ChatGPT, focusing on their attitudes, concerns, and usage patterns. The findings indicate that ChatGPT offers significant advantages such as improved information access and increased interactivity, but its adoption is accompanied by concerns about the accuracy of the information provided and the necessity for well-defined guidelines to optimize its use.
Title: Integrating AI in college education: Positive yet mixed experiences with ChatGPT. Meta-Radiology 2(4), Article 100113.
Pub Date: 2024-12-01 | Epub Date: 2024-09-21 | DOI: 10.1016/j.metrad.2024.100103
Yutong Zhang , Yi Pan , Tianyang Zhong , Peixin Dong , Kangni Xie , Yuxiao Liu , Hanqi Jiang , Zihao Wu , Zhengliang Liu , Wei Zhao , Wei Zhang , Shijie Zhao , Tuo Zhang , Xi Jiang , Dinggang Shen , Tianming Liu , Xin Zhang
Medical images and radiology reports are essential for physicians to diagnose medical conditions. However, the vast diversity and cross-source heterogeneity of these data pose significant challenges to the generalizability of current data-mining methods for clinical decision-making. Recently, multimodal large language models (MLLMs), especially the Gemini-Vision series (Gemini) and GPT-4 series (GPT-4), have revolutionized numerous domains, significantly impacting the medical field. In this study, we conducted a detailed evaluation of the Gemini-series models (Gemini-1.0-Pro-Vision, Gemini-1.5-Pro, and Gemini-1.5-Flash) and GPT-series models (GPT-4o, GPT-4-Turbo, and GPT-3.5-Turbo) across 14 medical datasets, covering 5 medical imaging categories (dermatology, radiology, dentistry, ophthalmology, and endoscopy) and 3 radiology report datasets. The investigated tasks encompass disease classification, lesion segmentation, anatomical localization, disease diagnosis, report generation, and lesion detection. We also validated the performance of the Claude-3-Opus, Yi-Large, Yi-Large-Turbo, and LLaMA 3 models to gain a comprehensive understanding of MLLMs in the medical field. Our experimental results demonstrated that Gemini-series models excelled in report generation and lesion detection but face challenges in disease classification and anatomical localization, whereas GPT-series models exhibited proficiency in lesion segmentation and anatomical localization but encountered difficulties in disease diagnosis and lesion detection. Additionally, both the Gemini and GPT series contain models with commendable generation efficiency. While both model families hold promise for reducing physician workload, alleviating pressure on limited healthcare resources, and fostering collaboration between clinical practitioners and artificial intelligence, substantial enhancements and comprehensive validation remain imperative before clinical deployment.
Title: Potential of multimodal large language models for data mining of medical images and free-text reports. Meta-Radiology 2(4), Article 100103.
Pub Date: 2024-12-01 | Epub Date: 2024-10-22 | DOI: 10.1016/j.metrad.2024.100114
Li-Miao Zou, Ke-Ting Xu, Yi-Ning Wang
Coronary artery disease (CAD) remains the leading cause of morbidity and mortality globally. Recent years have witnessed a steep increase in the number of cardiac CT examinations, including coronary CT angiography (CCTA) and non-contrast ECG-gated cardiac CT, placing a heavy load on radiologists. Artificial intelligence (AI), which aims to automate tasks that resemble human intelligence, presents itself as a promising solution. AI has played an increasingly important role in cardiac CT, from advanced image reconstruction to coronary stenosis and plaque analysis, flow prediction, and potentially better risk stratification and event prediction. In this review, we summarize state-of-the-art AI approaches applied to cardiac CT and their future implications.
Title: Research advances and applications of artificial intelligence in cardiac CT. Meta-Radiology 2(4), Article 100114.
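The coronary stenosis analysis that such AI pipelines automate is ultimately reported as percent diameter stenosis: the relative narrowing of the minimal lumen against a reference diameter. A minimal sketch of that calculation (function name and example values are illustrative, not from the review):

```python
def percent_stenosis(d_min_mm: float, d_ref_mm: float) -> float:
    """Percent diameter stenosis from minimal-lumen and reference diameters."""
    if d_ref_mm <= 0:
        raise ValueError("reference diameter must be positive")
    # Clamp at 0 so an ectatic segment (d_min > d_ref) is not negative
    return max(0.0, (1.0 - d_min_mm / d_ref_mm) * 100.0)

# A 1.5 mm minimal lumen in a 3.0 mm reference segment -> 50% stenosis,
# a common threshold for calling disease "obstructive" in CCTA reporting.
print(percent_stenosis(1.5, 3.0))
```

In an automated pipeline the two diameters would come from AI-derived centerline and lumen-contour measurements rather than manual calipers.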
Pub Date: 2024-09-01; Epub Date: 2024-06-26; DOI: 10.1016/j.metrad.2024.100098
Qianyun Liu , Wenwei Zhu , Fulong Song , Tuo Lou , Lei He , Wenming Zhou , Zhichao Feng
Hepatocellular carcinoma (HCC) ranks as the sixth most prevalent and the fourth most lethal malignancy worldwide, frequently manifesting at advanced stages with limited therapeutic options. Despite notable therapeutic advancements, challenges persist in precisely identifying patients likely to respond to immune-checkpoint inhibitors (ICIs). The tumor immune microenvironment (TIME) plays a pivotal role in the biological behavior of HCC, necessitating non-invasive methods for its comprehensive assessment prior to treatment initiation. Spatiotemporal molecular medicine, particularly radio-immunomics, has emerged as a promising approach that integrates multi-omics data to decode the TIME. This review delineates the intricate TIME characteristics of HCC, summarizes recent advancements in radiomics for immune profiling within the framework of spatiotemporal molecular medicine, and delves into the challenges and future prospects of radio-immunomics, highlighting the dynamic interplay of radiomics, genomics, and immunobiology. The evolving field of radio-immunomics holds unparalleled potential for non-invasive, personalized characterization of the TIME in HCC, providing avenues to inform tailored treatments and optimize patient outcomes.
{"title":"Radio-immunomics in hepatocellular carcinoma: Unraveling the tumor immune microenvironment","authors":"Qianyun Liu , Wenwei Zhu , Fulong Song , Tuo Lou , Lei He , Wenming Zhou , Zhichao Feng","doi":"10.1016/j.metrad.2024.100098","DOIUrl":"10.1016/j.metrad.2024.100098","url":null,"abstract":"<div><p>Hepatocellular carcinoma (HCC) ranks as the sixth most prevalent and the fourth most lethal malignancy worldwide, frequently manifesting at advanced stages with limited therapeutic options. Despite notable therapeutic advancements, challenges persist in precisely identifying patients likely to respond to immune-checkpoint inhibitors (ICIs). The tumor immune microenvironment (TIME) plays a pivotal role in the biological behavior of HCC, necessitating non-invasive methods for a comprehensive assessment prior to treatment initiation. Spatiotemporal molecular medicine, particularly radio-immunomics, emerges as a promising approach through integrating multi-omics data to decode the TIME. This review delineates the intricate TIME characteristics of HCC, summarizes recent advancements in radiomics for immune profiling within the framework of spatiotemporal molecular medicine, and delves into challenges and future prospects of radio-immunomics, highlighting the dynamic interplay of radiomics, genomics, and immunobiology. The evolving field of radio-immunomics holds unparalleled potential for non-invasive, personalized characterization of TIME in HCC, providing avenues to inform tailored treatments and optimize patient outcomes.</p></div>","PeriodicalId":100921,"journal":{"name":"Meta-Radiology","volume":"2 3","pages":"Article 100098"},"PeriodicalIF":0.0,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2950162824000523/pdfft?md5=6bba357be0bd56ec90742ffef845a8c2&pid=1-s2.0-S2950162824000523-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141961952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
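The "radiomics for immune profiling" mentioned above starts from handcrafted intensity features extracted inside a tumor mask. As a minimal sketch (assumptions, not the authors' pipeline: the function name, 32-bin histogram, and synthetic data are illustrative), here are first-order radiomic features of the kind fed into radio-immunomic models:

```python
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Simple first-order intensity statistics inside a binary tumor mask."""
    voxels = image[mask.astype(bool)]
    # Histogram entropy: discretize intensities, then Shannon entropy in bits.
    hist, _ = np.histogram(voxels, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    mean, std = voxels.mean(), voxels.std()
    return {
        "mean": float(mean),
        "std": float(std),
        "skewness": float(((voxels - mean) ** 3).mean() / std ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Synthetic "lesion": Gaussian intensities (mean 60 HU, sd 10) in a cube mask.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, size=(16, 16, 16))
msk = np.zeros(img.shape, dtype=bool)
msk[4:12, 4:12, 4:12] = True
feats = first_order_features(img, msk)
print(sorted(feats))  # ['entropy', 'mean', 'skewness', 'std']
```

Real radio-immunomic models would add texture (e.g. GLCM) and shape features and correlate them with immune markers; the principle of mask-restricted feature extraction is the same.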