Pub Date: 2025-12-01; Epub Date: 2025-09-10; DOI: 10.1007/s13246-025-01633-y
Shiho Kuwajima, Daisuke Oura
In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion. A total of 129 lung CT examinations were analyzed, and the heart rate, height, weight, and body mass index (BMI) of all patients were obtained from medical records. Images with and without CLEAR Motion were reconstructed, and quantitative evaluation was performed using the variance of the Laplacian (VL) and PSNR. The difference in VL (DVL) between the two reconstruction methods was used to evaluate in which part of the lung field (upper, middle, or lower) CLEAR Motion is most effective. To evaluate the effect of motion correction based on patient characteristics, the correlation between BMI, heart rate, and DVL was determined. Visual assessment of motion artifacts was performed using paired comparisons by nine radiological technologists. With the exception of one case, VL was higher with CLEAR Motion. Almost all cases (110 cases) showed the largest DVL in the lower lung field. BMI showed a positive correlation with DVL (r = 0.55, p < 0.05), while no differences in DVL were observed based on heart rate. The average PSNR was 35.8 ± 0.92 dB. Visual assessments indicated that CLEAR Motion was preferred in most cases, with an average preference score of 0.96 (p < 0.05). Using CLEAR Motion allows images with fewer motion artifacts to be obtained in lung CT.
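For illustration, a minimal Python sketch of the two quantitative metrics named above (variance of the Laplacian as a sharpness/motion-artifact surrogate, and PSNR between two reconstructions), assuming each CT slice is available as a NumPy array; function names and parameters are illustrative and are not the authors' code.

import numpy as np
from scipy.ndimage import laplace

def variance_of_laplacian(img):
    # Sharpness surrogate: motion-degraded slices give lower values.
    return float(np.var(laplace(img.astype(np.float64))))

def psnr(reference, test):
    # Peak signal-to-noise ratio (dB) between two reconstructions of the same slice.
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# DVL > 0 would indicate a sharper image with motion correction:
# dvl = variance_of_laplacian(slice_clear_motion) - variance_of_laplacian(slice_standard)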
{"title":"Clinical evaluation of motion robust reconstruction using deep learning in lung CT.","authors":"Shiho Kuwajima, Daisuke Oura","doi":"10.1007/s13246-025-01633-y","DOIUrl":"10.1007/s13246-025-01633-y","url":null,"abstract":"<p><p>In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion. A total of 129 lung CT was analyzed, and heart rate, height, weight, and BMI of all patients were obtained from medical records. Images with and without CLEAR Motion were reconstructed, and quantitative evaluation was performed using variance of Laplacian (VL) and PSNR. The difference in VL (DVL) between the two reconstruction methods was used to evaluate which part of the lung field (upper, middle, or lower) CLEAR Motion is effective. To evaluate the effect of motion correction based on patient characteristics, the correlation between body mass index (BMI), heart rate and DVL was determined. Visual assessment of motion artifacts was performed using paired comparisons by 9 radiological technologists. With the exception of one case, VL was higher in CLEAR Motion. Almost all the cases (110 cases) showed large DVL in the lower part. BMI showed a positive correlation with DVL (r = 0.55, p < 0.05), while no differences in DVL were observed based on heart rate. The average PSNR was 35.8 ± 0.92 dB. Visual assessments indicated that CLEAR Motion was preferred in most cases, with an average preference score of 0.96 (p < 0.05). Using Clear Motion allows for obtaining images with fewer motion artifacts in lung CT.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1949-1954"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145030968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; Epub Date: 2025-07-28; DOI: 10.1007/s13246-025-01604-3
Kasia Bobrowski, Jonathon Lee
Immediate breast reconstruction is increasing in use in Australia and accounts for almost 10% of breast cancer patients (Roder, Breast 22:1220-1225, 2013). Many treatments include a bolus to increase dose to the skin surface. Air gaps under bolus increase uncertainty in dosimetry, and many bolus types are unable to conform to the shape of the breast or are not flexible throughout treatment if there is a swelling-induced contour change. This study investigates the use of two bolus types that can be manufactured in-house: wet combine and ThermoBolus. Wet combine is a material composed of several water-soaked dressings. ThermoBolus is a product developed in-house that consists of thermoplastic encased in silicone. Plans using a volumetric arc therapy technique were created for each bolus and dosimetry performed with thermoluminescent detectors (TLDs) and EBT-3 film over three fractions. Wax was used to simulate swelling and allow analysis of the flexibility of the bolus materials. ThermoBolus had a range of agreement with calculation from -2 to 4% for film measurement and -5.6 to 1.0% for TLDs. Wet combine had a range of agreement with calculation from 1.6 to 10.5% for film measurement and -13.5 to 13.1% for TLDs. It showed consistent conformity and flexibility for all fractions and with the induced contour change, but air gaps of 2-3 mm were observed between layers of the material. ThermoBolus and wet combine are able to conform to contour change without the introduction of large air gaps between the patient surface and bolus. ThermoBolus is reusable and can be remoulded if the patient undergoes significant contour change during the course of treatment. It is able to be modelled accurately by the treatment planning system. Wet combine shows inconsistency in manufacture and requires more than one bolus to be made over the course of treatment, reducing accuracy in modelling and dosimetry.
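As a rough illustration of the agreement metric reported above, a short Python sketch computing the percent difference of each measured dose point (TLD or film ROI) from the treatment planning calculation and the resulting agreement range; the variable names are illustrative, not the authors' analysis code.

import numpy as np

def agreement_range(measured, calculated):
    # Percent difference of each measurement from the planned dose,
    # returned as the (minimum, maximum) range across all points.
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    diff = 100.0 * (measured - calculated) / calculated
    return diff.min(), diff.max()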
{"title":"A comparison of two bolus types for radiotherapy following immediate breast reconstruction.","authors":"Kasia Bobrowski, Jonathon Lee","doi":"10.1007/s13246-025-01604-3","DOIUrl":"10.1007/s13246-025-01604-3","url":null,"abstract":"<p><p>Immediate breast Reconstruction is increasing in use in Australia and accounts for almost 10% of breast cancer patients (Roder in Breast 22:1220-1225, 2013). Many treatments include a bolus to increase dose to the skin surface. Air gaps under bolus increase uncertainty in dosimetry and many bolus types are unable to conform to the shape of the breast or are not flexible throughout treatment if there is a swelling induced contour change. This study investigates the use of two bolus types that can be manufactured in house-wet combine and ThermoBolus. Wet combine is a material composed of several water soaked dressings. ThermoBolus is a product developed in-house that consists of thermoplastic encased in silicone. Plans using a volumetric arc therapy technique were created for each bolus and dosimetry performed with thermoluminescent detectors (TLDs) and EBT-3 film over three fractions. Wax was used to simulate swelling and allow analysis of the flexibility of the bolus materials. ThermoBolus had a range of agreement with calculation from -2 to 4% for film measurement and -5.6 to 1.0% for TLDs. Wet combine had a range of agreement with calculation from 1.6 to 10.5% for film measurement and -13.5 to 13.1% for TLDs. It showed consistent conformity and flexibility for all fractions and with induced contour but air gaps of 2-3 mm were observed between layers of the material. ThermoBolus and wet combine are able to conform to contour change without the introduction of large air gaps between the patient surface and bolus. ThermoBolus is reusable and can be remoulded if the patient undergoes significant contour change during the course of treatment. It is able to be modelled accurately by the treatment planning system. Wet combine shows inconsistency in manufacture and requires more than one bolus to be made over the course of treatment, reducing accuracy in modelling and dosimetry.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1601-1609"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144734066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; Epub Date: 2025-08-04; DOI: 10.1007/s13246-025-01619-w
Subhash Mondal, Amitava Nag
Artificial intelligence has shown great promise in healthcare, particularly in non-invasive diagnostics using biosignals. This study focuses on classifying eye states (open or closed) using electroencephalogram (EEG) signals captured via a 14-electrode neuroheadset, recorded through a brain-computer interface (BCI). A publicly available dataset comprising 14,980 instances was used, where each sample represents EEG signals corresponding to eye activity. Fourteen classical machine learning (ML) models were evaluated using a tenfold cross-validation approach. The preprocessing pipeline involved removing outliers using the Z-score method, addressing class imbalance with SMOTETomek, and applying a bandpass filter to reduce signal noise. Significant EEG features were selected using a two-sample independent t-test (p < 0.05), ensuring only statistically relevant electrodes were retained. Additionally, the Common Spatial Pattern (CSP) method was used for feature extraction to enhance class separability by maximizing variance differences between eye states. Experimental results demonstrate that several classifiers achieved strong performance, with accuracy above 90%. The k-Nearest Neighbours classifier yielded the highest accuracy of 97.92% with CSP, and 97.75% without CSP. The application of CSP also enhanced the performance of the Multi-Layer Perceptron and Support Vector Machine, reaching accuracies of 95.30% and 93.93%, respectively. The results affirm that integrating statistical validation, signal processing, and ML techniques can enable accurate and efficient EEG-based eye state classification, with practical implications for real-time BCI systems, offering a lightweight solution for real-time wearable healthcare applications.
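A compact Python sketch of the preprocessing and classification pipeline described above (Z-score outlier removal, band-pass filtering, t-test electrode selection, SMOTETomek rebalancing, and k-NN with tenfold cross-validation); dataset loading and the CSP step are omitted, and the cut-off values and filter band are illustrative assumptions rather than the study's exact settings.

import numpy as np
from scipy import signal, stats
from imblearn.combine import SMOTETomek
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def eye_state_accuracy(X, y, fs=128.0, z_thresh=3.0, alpha=0.05):
    # X: (n_samples, n_electrodes) EEG array; y: eye-state labels (0 = open, 1 = closed).
    z = np.abs(stats.zscore(X, axis=0))
    keep = (z < z_thresh).all(axis=1)          # drop Z-score outliers
    X, y = X[keep], y[keep]
    b, a = signal.butter(4, [0.5, 45.0], btype="bandpass", fs=fs)
    X = signal.filtfilt(b, a, X, axis=0)       # band-pass filter each electrode
    _, p = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
    X = X[:, p < alpha]                        # keep statistically significant electrodes
    X, y = SMOTETomek(random_state=0).fit_resample(X, y)   # rebalance classes
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=10)
    return scores.mean()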
{"title":"A computational eye state classification model using EEG signal based on data mining techniques: comparative analysis.","authors":"Subhash Mondal, Amitava Nag","doi":"10.1007/s13246-025-01619-w","DOIUrl":"10.1007/s13246-025-01619-w","url":null,"abstract":"<p><p>Artificial Intelligence has shown great promise in healthcare, particularly in non-invasive diagnostics using bio signals. This study focuses on classifying eye states (open or closed) using Electroencephalogram (EEG) signals captured via a 14-electrode neuroheadset, recorded through a Brain-Computer Interface (BCI). A publicly available dataset comprising 14,980 instances was used, where each sample represents EEG signals corresponding to eye activity. Fourteen classical machine learning (ML) models were evaluated using a tenfold cross-validation approach. The preprocessing pipeline involved removing outliers using the Z-score method, addressing class imbalance with SMOTETomek, and applying a bandpass filter to reduce signal noise. Significant EEG features were selected using a two-sample independent t-test (p < 0.05), ensuring only statistically relevant electrodes were retained. Additionally, the Common Spatial Pattern (CSP) method was used for feature extraction to enhance class separability by maximizing variance differences between eye states. Experimental results demonstrate that several classifiers achieved strong performance, with accuracy above 90%. The k-Nearest Neighbours classifier yielded the highest accuracy of 97.92% with CSP, and 97.75% without CSP. The application of CSP also enhanced the performance of Multi-Layer Perceptron and Support Vector Machine, reaching accuracies of 95.30% and 93.93%, respectively. The results affirm that integrating statistical validation, signal processing, and ML techniques can enable accurate and efficient EEG-based eye state classification, with practical implications for real-time BCI systems and offering a lightweight solution for real-time healthcare wearable applications healthcare applications.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1761-1774"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144785685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; Epub Date: 2025-10-20; DOI: 10.1007/s13246-025-01645-8
Lifeng Yang, Shaojie Gu, Binbin Liu, Junjie Wang, Junwei Cheng, Yuanxi Zhang, Zhengan Xia, Yan Yang
Blood pressure is an essential indicator of cardiovascular health in the human body, and regular and accurate blood pressure measurement is essential for preventing cardiovascular diseases. The emergence of photoplethysmography (PPG) and the advancement of machine learning offer new opportunities for noninvasive blood pressure measurement. This paper proposes a non-contact method for measuring blood pressure using face video and machine learning. The method extracts facial remote photoplethysmography (rPPG) signals from face video captured by a camera and enhances the signal quality of the rPPG through a set of filtering processes. A blood pressure regression model is constructed using the extreme gradient boosting tree (XGBoost) method to estimate blood pressure from the rPPG signals. This approach achieved accurate blood pressure measurement, with a measurement error of 4.8893 ± 6.6237 mmHg for systolic pressure and 4.0805 ± 5.5821 mmHg for diastolic pressure. Experimental results show that this method fully complies with the standard of the Association for the Advancement of Medical Instrumentation (AAMI). The proposed method has minor errors in predicting the systolic and diastolic blood pressures and achieves a grade A evaluation for both systolic and diastolic blood pressures according to the British Hypertension Society (BHS) standards.
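A minimal Python sketch of the regression stage described above: a band-pass-filtered rPPG trace is reduced to a few hand-crafted features and passed to an XGBoost regressor; the facial signal extraction itself is not shown, and the feature choices, filter band, and hyperparameters are illustrative assumptions rather than the authors' settings.

import numpy as np
from scipy import signal
from xgboost import XGBRegressor

def rppg_features(trace, fs=30.0):
    # Band-pass to the typical pulse band and derive simple waveform features.
    b, a = signal.butter(3, [0.7, 4.0], btype="bandpass", fs=fs)
    x = signal.filtfilt(b, a, np.asarray(trace, dtype=float))
    peaks, _ = signal.find_peaks(x, distance=int(0.4 * fs))
    rate = 60.0 * fs / np.mean(np.diff(peaks)) if len(peaks) > 1 else 0.0
    return np.array([rate, np.std(x), np.ptp(x)])

# One regressor per target (systolic and diastolic pressure), e.g.:
# X = np.vstack([rppg_features(t) for t in training_traces])
# sbp_model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X, sbp_labels)
# sbp_pred = sbp_model.predict(np.vstack([rppg_features(t) for t in test_traces]))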
{"title":"A non-contact blood pressure measurement method based on face video.","authors":"Lifeng Yang, Shaojie Gu, Binbin Liu, Junjie Wang, Junwei Cheng, Yuanxi Zhang, Zhengan Xia, Yan Yang","doi":"10.1007/s13246-025-01645-8","DOIUrl":"10.1007/s13246-025-01645-8","url":null,"abstract":"<p><p>Blood pressure is an essential indicator of cardiovascular health in the human body, and regular and accurate blood pressure measurement is essential for preventing cardiovascular diseases. The emergence of photoplethysmography (PPG) and the advancement of machine learning offers new opportunities for noninvasive blood pressure measurement. This paper proposes a non-contact method for measuring blood pressure using face video and machine learning. This method extracts facial remote photoplethysmography (RPPG) signals from face video captured by a camera, and enhances the signal quality of RPPG through a set of filtering processes. The blood pressure regression model is constructed using the extreme gradient boosting tree (XGBoost) method to estimate blood pressure from RPPG signals. This approach achieved accurate blood pressure measurement, with a measurement error of 4.8893 ± 6.6237 mmHg for systolic pressure and 4.0805 ± 5.5821 mmHg for diastolic pressure. Experimental results show that this method fully complies with the American Medical Instrumentation Association (AAMI).Our proposed method has minor errors in predicting the systolic and diastolic blood pressures and achieves grade A evaluation for both systolic and diastolic blood pressures according to the British Hypertension Society (BHS) standards.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"2059-2067"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145330455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; Epub Date: 2025-09-23; DOI: 10.1007/s13246-025-01626-x
Manas K Nag, Anup K Sadhu, Samiran Das, Chandan Kumar, Sandeep Choudhary
Segmenting ischemic stroke lesions from Non-Contrast CT (NCCT) scans is a complex task due to the hypo-intense nature of these lesions compared to surrounding healthy brain tissue and their iso-intensity with lateral ventricles in many cases. Identifying early acute ischemic stroke lesions in NCCT remains particularly challenging. Computer-assisted detection and segmentation can serve as valuable tools to support clinicians in stroke diagnosis. This paper introduces CoAt U SegNet, a novel deep learning model designed to detect and segment acute ischemic stroke lesions from NCCT scans. Unlike conventional 3D segmentation models, this study presents an advanced 3D deep learning approach to enhance delineation accuracy. Traditional machine learning models have struggled to achieve satisfactory segmentation performance, highlighting the need for more sophisticated techniques. For model training, 50 NCCT scans were used, with 10 scans for validation and 500 scans for testing. The encoder convolution blocks incorporated dilation rates of 1, 3, and 5 to capture multi-scale features effectively. Performance evaluation on 500 unseen NCCT scans yielded a Dice similarity score of 75% and a Jaccard index of 70%, demonstrating notable improvement in segmentation accuracy. An enhanced similarity index was employed to refine lesion segmentation, which can further aid in distinguishing the penumbra from the core infarct area, contributing to improved clinical decision-making.
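For reference, a small Python sketch of the two overlap metrics reported above (Dice similarity coefficient and Jaccard index) for binary 3D lesion masks; this is an evaluation helper only, not the CoAt U SegNet model.

import numpy as np

def dice_and_jaccard(pred, truth):
    # pred, truth: boolean 3D arrays for the predicted and reference lesion masks.
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    jaccard = intersection / np.logical_or(pred, truth).sum()
    return dice, jaccard

For a single segmentation the two metrics are deterministically related (Jaccard = Dice / (2 - Dice)); cohort-level averages such as those quoted above need not follow that relation exactly.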
{"title":"3D CoAt U SegNet-enhanced deep learning framework for accurate segmentation of acute ischemic stroke lesions from non-contrast CT scans.","authors":"Manas K Nag, Anup K Sadhu, Samiran Das, Chandan Kumar, Sandeep Choudhary","doi":"10.1007/s13246-025-01626-x","DOIUrl":"10.1007/s13246-025-01626-x","url":null,"abstract":"<p><p>Segmenting ischemic stroke lesions from Non-Contrast CT (NCCT) scans is a complex task due to the hypo-intense nature of these lesions compared to surrounding healthy brain tissue and their iso-intensity with lateral ventricles in many cases. Identifying early acute ischemic stroke lesions in NCCT remains particularly challenging. Computer-assisted detection and segmentation can serve as valuable tools to support clinicians in stroke diagnosis. This paper introduces CoAt U SegNet, a novel deep learning model designed to detect and segment acute ischemic stroke lesions from NCCT scans. Unlike conventional 3D segmentation models, this study presents an advanced 3D deep learning approach to enhance delineation accuracy. Traditional machine learning models have struggled to achieve satisfactory segmentation performance, highlighting the need for more sophisticated techniques. For model training, 50 NCCT scans were used, with 10 scans for validation and 500 scans for testing. The encoder convolution blocks incorporated dilation rates of 1, 3, and 5 to capture multi-scale features effectively. Performance evaluation on 500 unseen NCCT scans yielded a Dice similarity score of 75% and a Jaccard index of 70%, demonstrating notable improvement in segmentation accuracy. An enhanced similarity index was employed to refine lesion segmentation, which can further aid in distinguishing the penumbra from the core infarct area, contributing to improved clinical decision-making.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1853-1863"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; Epub Date: 2025-07-21; DOI: 10.1007/s13246-025-01602-5
Luis Muñoz, Peter McLoone, Peter Metcalfe, Anatoly B Rosenfeld, Giordano Biasi
This study assesses the updated Monaco TPS virtual source model (VSM) 2.0, which removes multileaf collimator (MLC) and jaw characterization as editable factors from the MLC geometry section within Monaco. The focus is on the impact of these changes on stereotactic radiotherapy (SRT) cases for spinal and intracranial treatments on two beam-matched linear accelerators. A validated custom VSM 1.6 model optimized for SRT was compared with the Elekta Accelerated Go Live 6 MV flattening filter-free (FFF) model and VSM 2.0. Evaluations included MLC characteristics measured with a high-resolution detector, measured output factors (OPF), ion chamber fields in the thorax phantom, and recalculations of clinically relevant SRT cases. VSM 2.0 improves MLC modelling. Ion chamber measurements for IAEA TD1583 were found to be within expected tolerances. Gamma pass rates for the two matched LINACs showed improvement at the 1%/1 mm criterion with a 10% threshold for single and multi-SRS brain and SABR spine treatments. VSM 2.0 represents a meaningful advancement in beam modelling within a Monte Carlo-based TPS environment, offering improved dosimetric performance and operational simplicity. Commercially available detectors were used to demonstrate that VSM 2.0 enhances Agility MLC modelling, supporting more precise SRT and SABR delivery for matched LINACs. Removing configurable dependencies from the beam model will result in more consistent, high-quality beam models and improved workflows for commissioning of the Monaco TPS.
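As context for the gamma criterion quoted above, a brute-force Python sketch of a 1D global gamma analysis at 1%/1 mm with a 10% low-dose threshold, assuming reference and evaluated dose profiles sampled on a common spatial grid; clinical evaluation uses optimised 2D/3D implementations, so this is illustrative only.

import numpy as np

def gamma_pass_rate_1d(x_mm, dose_ref, dose_eval, dose_crit=0.01, dist_crit_mm=1.0, cutoff=0.10):
    # Global gamma: dose difference normalised to the reference maximum.
    d_norm = dose_crit * dose_ref.max()
    evaluate = dose_ref >= cutoff * dose_ref.max()   # apply the low-dose threshold
    gammas = []
    for xr, dr in zip(x_mm[evaluate], dose_ref[evaluate]):
        dist_term = ((x_mm - xr) / dist_crit_mm) ** 2
        dose_term = ((dose_eval - dr) / d_norm) ** 2
        gammas.append(np.sqrt(np.min(dist_term + dose_term)))   # discrete search, no interpolation
    return 100.0 * np.mean(np.array(gammas) <= 1.0)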
{"title":"Evaluating Monaco 6.2.2 in complex radiotherapy across matched LINACs: improved MLC modelling and dose accuracy with virtual source model 2.0.","authors":"Luis Muñoz, Peter McLoone, Peter Metcalfe, Anatoly B Rosenfeld, Giordano Biasi","doi":"10.1007/s13246-025-01602-5","DOIUrl":"10.1007/s13246-025-01602-5","url":null,"abstract":"<p><p>This study assesses the updated Monaco TPS virtual source model (VSM) 2.0, which removes multileaf collimator (MLC) and jaw characterization as editable factors from the MLC geometry section within Monaco. The focus is on the impact of changes to stereotactic radiotherapy (SRT) cases for spinal and intracranial treatments for two beam matched linear accelerators. A validated custom VSM 1.6 model optimized for SRT was compared with the Elekta Accelerated Go Live 6 MV flattening filter-free (FFF) and VSM 2.0. Evaluations included measured MLC characteristics with a high-resolution detector, measured output factors (OPF), ion chamber fields in the thorax phantom, and recalculations of clinically relevant SRT cases. VSM 2.0 improves MLC modelling. Ion chamber measurements for IAEA TD1583 measurements were found to be within expected tolerances. Gamma pass rates for two matched LINACs evidenced improvement at 1%, 1 mm and 10% threshold for single and multi-SRS brain and SABR Spine treatments. VSM 2.0 represents a meaningful advancement in beam modelling within a Monte Carlo-based TPS environment, offering improved dosimetric performance and operational simplicity. Commercially available detectors were used to demonstrate that VSM 2.0 enhances agility MLC modelling, supporting more precise SRT and SABR delivery for matched LINACs. Removing configurable dependencies from the beam model will result in more consistent high quality beam models, an improves workflows for commissioning of the Monaco TPS.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1573-1588"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738645/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144676209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; Epub Date: 2025-08-21; DOI: 10.1007/s13246-025-01630-1
Andrew Chacon, Sylvia Gong, Artur Cichocki, Talia Enright, Harris Panopoulos, Nathan Sonnberger, Andrew M Scott, Graeme O'Keefe
Zirconium-89 is presently undergoing pre-clinical investigation for its potential application as a positron emission tomography (PET) theranostic radioisotope. A critical consideration in the increasing number of trials and eventual clinical implementations is a comprehensive understanding of the radioactive waste byproducts and their quantification. This study focuses on the investigation and characterisation of the waste isotopes generated during the production of zirconium-89, employing a combination of Geant4 Monte Carlo simulation and experimental methodologies utilising commercially obtainable starting materials from Thermofisher. After cyclotron production, waste samples were taken and measured using a high-purity germanium detector. Subsequent spectrum analysis consistently revealed the presence of the following isotopes, in units of kBq per GBq of zirconium-89 produced: cobalt-56 (13 ± 2, 14 ± 2, 15 ± 1), cobalt-57 (0.087 ± 0.004, 0.097 ± 0.004, 0.086 ± 0.007), rhenium-183 (2.61 ± 0.06, 3.29 ± 0.06, 2.47 ± 0.09), scandium-48 (27 ± 0.9, 21.1 ± 0.4), yttrium-88 (0.67 ± 0.06, 1.1 ± 0.4, 0.73 ± 0.06) and zirconium-88 (90 ± 5, 1340 ± 60, 35 ± 2). All of the waste isotopes could be reasonably estimated using Geant4 Monte Carlo simulations, or the deviation could be justified. The repeatability and predictability of the isotopes and activities will enable informed decision-making regarding storage and disposal procedures in accordance with local legislative requirements.
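A short Python sketch of the bookkeeping behind such a quantification: estimating a waste-isotope activity from a background-subtracted HPGe photopeak and normalising it per GBq of zirconium-89 produced, in the kBq/GBq units used above; the detector efficiency, gamma emission intensity, and decay data are user-supplied assumptions, not values from the study.

import numpy as np

def activity_from_peak(net_counts, live_time_s, efficiency, gamma_intensity,
                       half_life_s, decay_time_s):
    # Activity at the counting time, then decay-corrected back to end of bombardment.
    activity_at_count = net_counts / (live_time_s * efficiency * gamma_intensity)
    decay_const = np.log(2.0) / half_life_s
    return activity_at_count * np.exp(decay_const * decay_time_s)   # Bq

def kbq_per_gbq(waste_activity_bq, zr89_activity_bq):
    # Normalise a waste-isotope activity to the amount of zirconium-89 produced.
    return (waste_activity_bq / 1e3) / (zr89_activity_bq / 1e9)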
{"title":"Monte Carlo prediction and experimental characterisation of long-lived waste byproducts arising from cyclotron production of zirconium-89 utilising a commercially available yttrium foil.","authors":"Andrew Chacon, Sylvia Gong, Artur Cichocki, Talia Enright, Harris Panopoulos, Nathan Sonnberger, Andrew M Scott, Graeme O'Keefe","doi":"10.1007/s13246-025-01630-1","DOIUrl":"10.1007/s13246-025-01630-1","url":null,"abstract":"<p><p>Zirconium-89 is presently undergoing pre-clinical investigation for its potential application as a positron emission tomography (PET) theranostic radioisotope. A critical consideration in the increasing number of trials and eventual clinical implementations is a comprehensive understanding of the radioactive waste byproducts and their quantification. This study focuses on the investigation and characterisation of the waste isotopes generated during the production of Zirconium-89, employing a combination of Geant4 Monte Carlo simulation and experimental methodologies utilising commercially obtainable starting materials from Thermofisher. Post cyclotron production samples of waste were taken and measured using a high purity germanium detector. Subsequent spectrum analysis consistently revealed the presence of the following isotopes in units of kBq per GBq of Zirconium-89 produced: cobalt-56 (13 ± 2, 14 ± 2, 15 ± 1), cobalt-57 (0.087 ± 0.004, 0.097 ± 0.004, 0.086 ± 0.007), rhenium-183 (2.61 ± 0.06, 3.29 ± 0.06, 2.47 ± 0.09), scandium-48 (27 ± 0.9, 21.1 ± 0.4), yttrium-88 (0.67 ± 0.06, 1.1 ± 0.4, 0.73 ± 0.06) and zirconium-88 (90 ± 5, 1340 ± 60, 35 ± 2). All the waste isotopes were able to reasonably be estimated using Geant4 Monte Carlo simulations or the deviation was able to be justified. The repeatability and predictability of isotopes and activities will enable informed decision-making regarding storage and disposal procedures in accordance with local legislative requirements.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1901-1910"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738615/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144974832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; Epub Date: 2025-07-28; DOI: 10.1007/s13246-025-01605-2
Adam L Jolly, Andrew L Fielding
Targeted alpha therapy (TαT) employs alpha particle-emitting radioisotopes conjugated to tumour-specific carriers to precisely irradiate tumour cells. Monte Carlo techniques have been used to accurately simulate absorbed dose and DNA damage for four promising TαT radionuclides: Actinium-225 (225Ac), Radium-223 (223Ra), Lead-212 (212Pb) and Astatine-211 (211At). TOPAS and TOPAS-nBio, based on the Geant4 and Geant4-DNA Monte Carlo codes respectively, were used to model the radioactive decay and alpha particle transport within a simplified spherical cell model. Four different sites within the cell model were used for the initial radionuclide distributions: the cell membrane layer, within the cytoplasm volume, on the nucleus surface, and within the nucleus volume. Results indicate higher absorbed doses to the nucleus per decay when radionuclides are initially located on the nucleus wall or within the nucleus volume. 225Ac and 223Ra, with longer decay chains and higher alpha yields, exhibit higher doses to the nucleus per decay compared to 212Pb and 211At. Notably, 211At, particularly when initially distributed within the nucleus volume or at its surface, demonstrates high relative efficacy, indicated by the absorbed dose to the nucleus per decay and the number of single- and double-strand breaks. These findings suggest that tumour-specific molecules should ideally target the nucleus to optimize efficacy.
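As a back-of-the-envelope companion to such simulations, a Python sketch converting the energy deposited in the nucleus per decay into absorbed dose for a simple spherical, water-density nucleus; the radius, density, and deposited energy are illustrative placeholders, not the study's geometry or results.

import numpy as np

MEV_TO_J = 1.602176634e-13

def nucleus_dose_per_decay(edep_mev, nucleus_radius_um=5.0, density_g_cm3=1.0):
    # Absorbed dose (Gy) = energy deposited (J) / nucleus mass (kg).
    radius_cm = nucleus_radius_um * 1e-4
    mass_kg = density_g_cm3 * (4.0 / 3.0) * np.pi * radius_cm ** 3 * 1e-3
    return edep_mev * MEV_TO_J / mass_kg

# Example: ~5 MeV fully deposited in a 5 um radius nucleus gives roughly 1.5 Gy per decay.
# print(nucleus_dose_per_decay(5.0))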
{"title":"Modelling single cell dosimetry and DNA damage of targeted alpha therapy using Monte-Carlo techniques.","authors":"Adam L Jolly, Andrew L Fielding","doi":"10.1007/s13246-025-01605-2","DOIUrl":"10.1007/s13246-025-01605-2","url":null,"abstract":"<p><p>Targeted alpha therapy (TαT) employs alpha particle-emitting radioisotopes conjugated to tumour-specific carriers to precisely irradiate tumour cells. Monte-carlo techniques have been used to accurately simulate absorbed dose and DNA damage for the four promising TαT radionuclides, Actinium-225 (<sup>225</sup>Ac), Radium-223, (<sup>223</sup>Ra), Lead-212 (<sup>212</sup>Pb) and Astatine-211, (<sup>211</sup>At). TOPAS and TOPAS-nBio, based on the Geant4 and Geant4-DNA monte-carlo codes respectively, were used to model the radioactive decay and alpha particle transport within a simplified spherical cell model. Four different sites within the cell model were used for the initial radionuclide distributions: the cell membrane layer, within the cytoplasm volume, on the nucleus surface, and within the nucleus volume. Results indicate higher absorbed doses to the nucleus per decay when radionuclides are initially located on the nucleus wall or within the nucleus volume. <sup>225</sup>Ac and <sup>223</sup>Ra, with longer decay chains and higher alpha yields, exhibit higher doses to the nucleus per decay compared to <sup>212</sup>Pb and <sup>211</sup>At. Notably, <sup>211</sup>At, particularly when initially distributed within the nucleus volume or at its surface, demonstrates high relative efficacy, indicated by the absorbed dose to the nucleus per decay and number of single and double-strand breaks. These findings suggest that tumour-specific molecules should ideally target the nucleus to optimize efficacy.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1611-1624"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738655/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144734069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; DOI: 10.1007/s13246-025-01624-z
Kohei Nakanishi, Seiichi Yamamoto, Masato Yoshida, Kenta Miwa, Ryuichi Nishii
The entrance surface dose (ESD) is calculated using the backscatter factor (BSF). However, BSFs for flat surfaces have been used even though simulations have shown that the BSFs for curved surfaces, which represent the human body more accurately, do not match those for flat surfaces. Measuring these values in practice presents a challenge because conventional dosimeters used for BSF measurement have sensitive volumes that cannot conform to curved surfaces. In this study, we measured the BSF for a curved surface using a flexible scintillator. The scintillator, composed of Gd₃Al₂Ga₃O₁₂ (GAGG) scintillator powder mixed with a silicone adhesive, was securely attached to the curved surface of a cylindrical phantom. The scintillator was irradiated with diagnostic X-rays, and the BSFs were evaluated as the ratio of the light output with and without the phantom. We successfully measured BSFs on a curved surface using a flexible scintillator. The mean difference between the BSFs obtained from the experiments using the flexible scintillator and those obtained from the simulations for the cylindrical phantom was 0.43%. The maximum difference was 1.47%, which was observed at a tube voltage of 40 kV. Thus, the BSFs measured using the flexible scintillator agree well with the simulated results. Our scintillator is useful for measuring BSFs on curved surfaces and contributes to dose management.
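A minimal Python sketch of the backscatter-factor estimate as defined above: the ratio of the mean scintillator light output with the phantom present to that measured free-in-air, with simple propagation of the repeat-measurement uncertainty; variable names are illustrative.

import numpy as np

def backscatter_factor(output_with_phantom, output_free_in_air):
    # BSF = mean light output on the phantom surface / mean output free-in-air.
    w = np.asarray(output_with_phantom, dtype=float)
    f = np.asarray(output_free_in_air, dtype=float)
    bsf = w.mean() / f.mean()
    # Combine the standard errors of the two means for a ratio quantity.
    rel_unc = np.sqrt((w.std(ddof=1) / (np.sqrt(w.size) * w.mean())) ** 2 +
                      (f.std(ddof=1) / (np.sqrt(f.size) * f.mean())) ** 2)
    return bsf, bsf * rel_unc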
{"title":"A novel method for measuring the backscatter factor on a curved surface for diagnostic X-rays using a flexible scintillator sheet.","authors":"Kohei Nakanishi, Seiichi Yamamoto, Masato Yoshida, Kenta Miwa, Ryuichi Nishii","doi":"10.1007/s13246-025-01624-z","DOIUrl":"10.1007/s13246-025-01624-z","url":null,"abstract":"<p><p>The ESD is calculated using the backscatter factor (BSF). However, BSFs for flat surfaces have been used even though simulations have shown that the BSFs for curved surfaces, which represent the human body more accurately, do not match those for flat surfaces. Measuring these values in practice presents a challenge because conventional dosimeters used for BSF measurement have sensitive volumes that cannot conform to curved surfaces. In this study, we measured the BSF for a curved surface using a flexible scintillator. The scintillator, composed of Gd₃Al₂Ga₃O₁₂ (GAGG) scintillator powder mixed with a silicone adhesive, was securely attached to the curved surface of a cylindrical phantom. Diagnostic X-rays were irradiated onto the scintillator, and the BSFs were evaluated as the ratio of the light output with and without the phantom. We successfully measured BSFs on a curved surface using a flexible scintillator. The mean difference between the BSFs obtained from the experiments using the flexible scintillator and those obtained from the simulations for the cylindrical phantom was 0.43%. The maximum difference was 1.47%, which was observed at a tube voltage of 40 kV. Thus, the BSFs measured using the flexible scintillator agree well with the simulated results. Our scintillator is useful for measuring BSFs on curved surfaces and contributes to dose management.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1831-1839"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144817978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01; Epub Date: 2025-09-22; DOI: 10.1007/s13246-025-01613-2
Philip Martin, Lois Holloway, Peter Metcalfe, Eng-Siew Koh, Farhannah Aly, Edward Chan, Caterina Brighi
An understanding of the repeatability of imaging biomarkers is key to their implementation as clinical tools. In this study, we calculate the repeatability and inter-correlation of radiomic features derived from quantitative MRI (qMRI) of glioblastoma (GBM) patients and assess the effect of image standardisation methods on these factors. We analysed scan-rescan diffusion-weighted MR images (DWI) and dynamic contrast-enhanced MR images (DCE) from 36 GBM patients obtained from The Cancer Imaging Archive (TCIA). These included 17 patients from the QIN-GBM-Treatment-Response cohort, scanned post-surgery and prior to chemo-radiation therapy, and 19 patients from the RIDER Neuro MRI cohort, scanned at diagnosis of tumour recurrence. For both patient cohorts, two sets of scans were taken 2-6 days apart. Each patient cohort was analysed independently to determine whether findings were consistent across different acquisition parameters. Parametric maps of apparent diffusion coefficient (ADC) and cerebral blood volume (CBV) were obtained from the DWI and DCE data, respectively. Intensity normalisation and noise filtering were applied to the parametric maps in multiple permutations to give seven distinct standardisation methods. Shape, first-order and second-order radiomic features for the parametric maps were calculated within the gross tumour volume (GTV). The intraclass correlation coefficient (ICC) was calculated between the feature values at the two imaging timepoints. The ICC of first- and second-order features derived from images with each standardisation method was compared to the ICC of corresponding features derived from images without standardisation. Based on the average ICC of features derived from ADC images without image standardisation, first-order features were the most repeatable in both patient cohorts. For ADC-derived features in the QIN cohort, shape features were the second most repeatable, followed by second-order features. For ADC-derived features in the RIDER cohort, second-order features were the second most repeatable, followed by shape features. In CBV images, shape features were the most repeatable, followed by second-order and then first-order features, in both patient cohorts. No image standardisation method implemented in this study was found to significantly increase the repeatability of ADC-derived first- or second-order features. For first-order CBV features, z-score normalisation without noise filtering produced a significant improvement in feature repeatability in both patient cohorts. Radiomic feature repeatability is impacted by feature class. Image standardisation methods implemented in this study were not found to be effective at improving the repeatability of ADC-derived features and had limited utility for improving CBV-derived features. Future radiomic studies should consider feature repeatability as an important factor in feature selection.
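For readers reproducing the repeatability analysis, a compact Python sketch of a two-way, single-measurement intraclass correlation for scan-rescan feature values; the specific ICC form used in the study is not stated in the abstract, so ICC(2,1) (two-way random effects, absolute agreement) is shown as one common choice.

import numpy as np

def icc_2_1(Y):
    # Y: (n_subjects, 2) array of one radiomic feature at the scan and rescan timepoints.
    n, k = Y.shape
    grand = Y.mean()
    row = Y.mean(axis=1, keepdims=True)     # per-subject means
    col = Y.mean(axis=0, keepdims=True)     # per-session means
    msr = k * np.sum((row - grand) ** 2) / (n - 1)                    # between-subject mean square
    msc = n * np.sum((col - grand) ** 2) / (k - 1)                    # between-session mean square
    mse = np.sum((Y - row - col + grand) ** 2) / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)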
{"title":"Repeatability of diffusion and perfusion MRI derived radiomic features in glioblastoma: a test-retest study.","authors":"Philip Martin, Lois Holloway, Peter Metcalfe, Eng-Siew Koh, Farhannah Aly, Edward Chan, Caterina Brighi","doi":"10.1007/s13246-025-01613-2","DOIUrl":"10.1007/s13246-025-01613-2","url":null,"abstract":"<p><p>An understanding of the repeatability of imaging biomarkers is key to their implementation as clinical tools. In this study we calculate the repeatability and inter-correlation of radiomic features derived from quantitative MRI (qMRI) of Glioblastoma (GBM) patients and assess the effect of image standardisation methods on these factors. We analysed scan-rescan Diffusion Weighted MR Images (DWI) and Dynamic Contrast Enhanced MR Images (DCE) from 36 GBM patients obtained from The Cancer Imaging Archive (TCIA). These included 17 patients, from the QIN-GBM-Treatment-Response patient cohort, scanned post surgery and prior to chemo-radiation therapy and 19 patients, from the RIDER Neuro MRI patient cohort, scanned at diagnosis of tumour recurrence. For both patient cohorts, two sets of scans were taken 2-6 days apart. Each of these patient cohorts was analysed independently to determine if findings were consistent across different acquisition parameters. Parametric maps of Apparent Diffusion Coefficient (ADC) and Cerebral Blood Volume (CBV) were obtained from DWI and DCE data, respectively. Intensity normalisation and noise filtering were applied to the parametric maps in multiple permutations to give 7 distinct standardisation methods. Shape, first order and second order radiomic features for the parametric maps were calculated within the Gross Tumour Volume (GTV). The Intraclass Correlation Coefficient (ICC) was calculated between the feature value at each imaging timepoint. The ICC of first and second order features derived from images with each standardisation method was compared to the ICC of corresponding features derived from images without standardisation. Based on the average ICC of features derived from ADC images without image standardisation, first order features were the most repeatable in both patient cohorts. For ADC derived features in the QIN cohort, shape features were the second most repeatable followed by second order features. For ADC derived features in the RIDER cohort, second order features were the second most repeatable followed by shape features. In CBV images, shape features were the most repeatable followed by second order and then first order in both patient cohorts. No image standardisation method implemented in this study was found to significantly increase the repeatability of ADC-derived first or second order features. For first order CBV features z-score normalisation without noise filtering produced a significant improvement in feature repeatability in both patient cohorts. Radiomic feature repeatability is impacted by feature class. Image standardisation methods implemented in this study were not found to be effective at improving the repeatability of ADC-derived features and had limited utility for improving CBV derived features. 
Future radiomic studies should consider feature repeatability as an important factor in feature selection.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1691-1702"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145114781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}