Pub Date: 2025-12-18 DOI: 10.1007/s13246-025-01647-6
Jianhong Liu, Wei Chen, Haochuan Jiang, Jun Jiang, Lianggeng Gong
Photon starvation in computed tomography, which occurs when insufficient photon counts allow electronic noise to dominate the signal, leads to severe degradation in reconstructed images. This paper proposes a pre-correction method that combines a negative feedback mechanism with an adaptive diffusion filter to mitigate photon-starvation effects by suppressing electronic noise in the sinogram prior to logarithmic transformation. The method was evaluated using ultra-low-dose scans of an anthropomorphic torso phantom and clinical patient data. For comparison, several sinogram-based denoising methods were also applied. The proposed method produced reconstructed images with the lowest noise, highest structural similarity, and superior spatial resolution, along with significantly reduced streaking and bias artifacts. Experimental results demonstrate that the proposed method effectively suppresses noise, streaking artifacts, and large-scale bias artifacts in low-signal anatomical regions under severe photon starvation in low-dose conditions, while maintaining acceptable resolution.
{"title":"A pre-log correction method based on dynamic approximation to reduce photon-starved deterioration.","authors":"Jianhong Liu, Wei Chen, Haochuan Jiang, Jun Jiang, Lianggeng Gong","doi":"10.1007/s13246-025-01647-6","DOIUrl":"https://doi.org/10.1007/s13246-025-01647-6","url":null,"abstract":"<p><p>Photon starvation in computed tomography, which occurs when insufficient photon counts allow electronic noise to dominate the signal, leads to severe degradation in reconstructed images. This paper proposes a pre-correction method that combines a negative feedback mechanism with an adaptive diffusion filter to mitigate photon-starved effects by suppressing electronic noise in the sinogram prior to logarithmic transformation. The method was evaluated using ultra-low-dose scans of an anthropomorphic torso phantom and clinical patient data. For comparison, several sinogram-based denoising methods were also applied. The proposed method produced reconstructed images with the lowest noise, highest structural similarity, and superior spatial resolution, along with significantly reduced streaking and bias artifacts. Experimental results demonstrate that the proposed method effectively suppresses noise, streaking artifacts and large-scale bias artifacts in low-signal anatomical regions under severe photon starvation in low-dose conditions, while maintaining acceptable resolution.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145775811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
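The abstract above does not specify the authors' adaptive diffusion filter, but the general idea of edge-preserving diffusion on a noisy sinogram row can be sketched with a generic 1D Perona-Malik scheme (all parameters here are illustrative assumptions, not the paper's method):

```python
import math

def perona_malik_1d(signal, iters=20, kappa=30.0, dt=0.2):
    """Edge-preserving diffusion: smooth where local gradients are small,
    preserve where gradients are large. Generic Perona-Malik sketch, not
    the authors' specific negative-feedback adaptive filter."""
    s = list(signal)
    for _ in range(iters):
        out = s[:]
        for i in range(1, len(s) - 1):
            ge = s[i + 1] - s[i]               # east-side gradient
            gw = s[i - 1] - s[i]               # west-side gradient
            ce = math.exp(-(ge / kappa) ** 2)  # conduction: ~1 in flat areas,
            cw = math.exp(-(gw / kappa) ** 2)  # ~0 across strong edges
            out[i] = s[i] + dt * (ce * ge + cw * gw)
        s = out
    return s
```

On a noisy step profile this suppresses small fluctuations while leaving the large step (an anatomical boundary in a sinogram) essentially untouched.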
Pub Date: 2025-12-15 DOI: 10.1007/s13246-025-01686-z
Mark Ashburner, Roger Huang, John Chin, Moamen Aly
<p><strong>Introduction: </strong>Deep learning (DL)-based auto-segmentation has rapidly become the state of the art in radiotherapy planning, significantly reducing contouring time while achieving geometric accuracy comparable to expert-derived contours [1-3]. While AI contouring on the planning CT (CTp) is now widely established, its application to cone-beam CT (CBCT) is less well explored, despite CBCT's critical role in daily image guidance for prostate radiotherapy. Current adaptive workflows rely on manual contouring or deformable image registration (DIR), both of which are resource-intensive and subject to limitations in accuracy and consistency. Recent advances in AI-based CBCT segmentation have shown promise in reducing manual workload, improving contour consistency, and supporting adaptive radiotherapy (ART) workflows [4]. To assess the clinical implications of these developments, this study retrospectively analyzed CBCT images from 20 prostate cancer patients, comparing AI- and DIR-generated contours to evaluate systematic differences and their potential impact on dosimetry and ART decision-making.</p><p><strong>Methods: </strong>Twenty prostate radiotherapy patients were retrospectively selected, treated with either 42.7 Gy in 7 fractions or 60 Gy in 20 fractions, and imaged on Halcyon linear accelerators using Hypersight CBCT ([Formula: see text]). AI-generated contours were produced with Limbus AI v1.8.0, while deformable image registration (DIR) contours were propagated from planning CTs in Velocity v4.2. Contour accuracy was assessed by two senior medical officers using a four-point Likert scale across 140 CBCTs. The prostate, bladder, and rectum were analyzed using the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), mean surface distance (MSD), center-of-mass (COM) displacement, and volumetric change relative to the planning CT. Dosimetric evaluation included [Formula: see text], [Formula: see text], [Formula: see text], and clinically defined organ-at-risk metrics to assess potential implications for adaptive radiotherapy. Statistical significance was tested using paired Student's t-tests and Wilcoxon signed-rank tests with a threshold of [Formula: see text].</p><p><strong>Results: </strong>AI-generated contours achieved acceptable clinical accuracy in >80% of cases, with fewer severe or medium errors than DIR-derived contours, of which 49% required minimal changes. Quantitative analysis demonstrated broadly comparable DSC, HD, and MSD across the prostate, bladder, and rectum. Organ variation on CBCT revealed larger mean center-of-mass shifts and volume differences for AI, particularly in bladder contours, whereas DIR showed smaller systematic deviations. Dosimetric comparisons highlighted that prostate dose metrics differed significantly between methods, bladder differences were mostly non-significant except at high-dose volumes, and rectum analysis revealed consistent statistically significant differences. Overall, although both methods captured daily anatomical variation, the results suggest complementary strengths for adaptive radiotherapy applications.</p><p><strong>Conclusion: </strong>AI-generated contours on CBCT images for prostate radiotherapy showed high geometric accuracy and clinical usability, requiring minimal expert correction, whereas DIR contours, although generally usable, showed greater variability, particularly for organs with large anatomical variation such as the bladder and rectum. Despite similar geometric comparisons, the statistically significant dosimetric differences underscore the importance of careful expert validation, especially for sensitive structures such as the rectum. These findings support integrating AI-based contouring into adaptive radiotherapy workflows to streamline clinical processes, reduce workload, and maintain treatment accuracy, while emphasizing that auto-contours, whether AI- or DIR-derived, should always undergo expert review to ensure safe and effective patient care.</p>
{"title":"Comparative analysis of AI-generated and deformed image registration contours on daily CBCT in prostate cancer radiation therapy: accuracy and dosimetric implications using commercial tools.","authors":"Mark Ashburner, Roger Huang, John Chin, Moamen Aly","doi":"10.1007/s13246-025-01686-z","DOIUrl":"https://doi.org/10.1007/s13246-025-01686-z","url":null,"abstract":"<p><strong>Introduction: </strong>Deep learning (DL)-based auto-segmentation has rapidly become the state-of-the-art in radiotherapy planning, significantly reducing contouring time while achieving geometric accuracy comparable to expert-derived contours [1-3]. While AI contouring on CTp is now widely established, its application to cone-beam CT (CBCT) is less well explored, despite CBCT's critical role in daily image guidance for prostate radiotherapy. Current adaptive workflows rely on manual contouring or deformable image registration (DIR), both of which are resource-intensive and subject to limitations in accuracy and consistency. Recent advances in AI-based CBCT segmentation have shown promise in reducing manual workload, improving contour consistency, and supporting adaptive radiotherapy (ART) workflows [4]. To assess the clinical implications of these developments, this study retrospectively analyzed CBCT images from 20 prostate cancer patients, comparing AI- and DIR-generated contours to evaluate systematic differences and their potential impact on dosimetry and ART decision-making.</p><p><strong>Methods: </strong>Twenty prostate radiotherapy patients were retrospectively selected, treated with either 42.7 Gy in 7 fractions or 60 Gy in 20 fractions, and imaged on Halcyon linear accelerators using Hypersight CBCT ([Formula: see text]). AI-generated contours were produced with Limbus AI v1.8.0, while deformable image registration (DIR) contours were propagated from planning CTs in Velocity v4.2. 
Contour accuracy was assessed by two senior medical officers using a four-point Likert scale across 140 CBCTs. Prostate, bladder, and rectum were analyzed using Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), mean surface distance (MSD), center-of-mass (COM) displacement, and volumetric change relative to the planning CT. Dosimetric evaluation included [Formula: see text], [Formula: see text], [Formula: see text], and clinically defined organ-at-risk metrics to assess potential implications for adaptive radiotherapy. Statistical significance was tested using paired Student's t-tests and Wilcoxon signed-rank tests with a threshold of [Formula: see text].</p><p><strong>Results: </strong>AI-generated contours achieved acceptable clinical accuracy in >80% of cases, with fewer severe or medium errors compared to DIR-derived contours, which required minimal changes of 49%. Quantitative analysis demonstrated broadly comparable Dice Similarity Coefficients (DSC), Hausdorff Distance (HD), and mean surface distance (MSD) across prostate, bladder, and rectum. Organ variation on CBCT revealed larger mean centre of mass shifts and volume differences for AI, particularly in bladder contours, whereas DIR showed smaller systematic deviations. 
Dosimetric comparisons highlighted that prostate dose metrics were significantly different between methods, while bladder differences were mostly non-significant except at high-dose volumes, and rectum analysis re","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145757846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
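The geometric metrics reported above (DSC, MSD) have standard definitions that can be sketched directly; the mean-surface-distance helper below is a brute-force stand-in on 2D contour point sets, not the commercial tools' mesh-based implementation:

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks
    (flattened 0/1 lists): 2*|A intersect B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

def mean_surface_distance(points_a, points_b):
    """Symmetric mean of nearest-neighbour distances between two
    contour point sets (brute force, 2D points as (x, y) tuples)."""
    def one_way(src, dst):
        return sum(min(((x - u) ** 2 + (y - v) ** 2) ** 0.5
                       for u, v in dst) for x, y in src) / len(src)
    return 0.5 * (one_way(points_a, points_b) + one_way(points_b, points_a))
```

For example, masks overlapping in one of two foreground voxels each give a DSC of 0.5, and identical contours give an MSD of zero.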
Pub Date: 2025-12-15 DOI: 10.1007/s13246-025-01685-0
Ramandeep Singh, Parikshith Chavakula, Joy Chatterjee, Anuj Saini, Deepak Joshi, Ashish Suri
Accurate prediction of human motor actions is essential for developing intuitive, responsive, and adaptive human-machine interaction systems. This study investigates the use of force myography (FMG) to predict knob-turning activity with varying torque values and arm angles. Participants performed knob-turning activities on three spiral springs with different torque values and at four arm angles. A convolutional neural network-long short-term memory (CNN-LSTM) hybrid classification approach was employed to classify the FMG data, predicting torque and arm angle with overall accuracies of 95.87 ± 2.59% and 94.06 ± 2.44%, respectively. The study also shows that the presence of subcutaneous fat did not significantly affect the classification of torque and arm angle ([Formula: see text], Mann-Whitney U test). These findings demonstrate the potential of FMG as an effective method for accurately predicting activities of daily life involving tasks with varying torque and arm angles.
{"title":"Turning a knob: deep learning-based prediction of torque and arm angles using force myography.","authors":"Ramandeep Singh, Parikshith Chavakula, Joy Chatterjee, Anuj Saini, Deepak Joshi, Ashish Suri","doi":"10.1007/s13246-025-01685-0","DOIUrl":"https://doi.org/10.1007/s13246-025-01685-0","url":null,"abstract":"<p><p>Accurate prediction of human motor actions is essential for developing intuitive, responsive, and adaptive human-machine interaction systems. This study investigates the use of force myography (FMG) to predict knob-turning activity with varying torque values and arm angles. Participants performed knob-turning activities on three spiral springs with different torque values and at four arm angles. A convolution neural network, long short-term memory hybrid classification approach was employed to classify the FMG data and predict torque and arm angle with an overall accuracy of 95.87 ± 2.59% and 94.06 ± 2.44%, respectively. The study also shows that the presence of subcutaneous fat did not significantly affect the classification of torque and arm angle ([Formula: see text], Mann-Whitney U test). These findings demonstrate the potential of FMG as an effective method for accurately predicting activities of daily life involving tasks with varying torque and arm angles.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145757998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
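Before a CNN-LSTM can classify multichannel FMG data, the continuous signal is typically segmented into fixed-length windows; a minimal sketch of that preprocessing step (window length and step size here are assumptions, not the study's settings):

```python
def window_signal(channels, win_len, step):
    """Segment a multichannel time series (list of per-channel sample
    lists) into overlapping windows shaped
    (n_windows, win_len, n_channels) -- the usual input layout for a
    CNN-LSTM classifier."""
    n = len(channels[0])
    windows = []
    for start in range(0, n - win_len + 1, step):
        # each window is win_len time steps, each step one value per channel
        windows.append([[ch[t] for ch in channels]
                        for t in range(start, start + win_len)])
    return windows
```

With 10 samples on 2 channels, a window length of 4 and a step of 2 yields four half-overlapping windows.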
Pub Date: 2025-12-15 DOI: 10.1007/s13246-025-01684-1
Jingjing M Dougherty, Chris J Beltran
To evaluate a proof-of-concept three-dimensional surface reconstruction technique using a hybrid LiDAR and RGB sensor system with an open-source, GPU-accelerated pipeline. The goal is to generate photorealistic digital twins of phantom surfaces for integration into radiotherapy collision avoidance workflows. A portable Intel RealSense sensor was used to acquire synchronized depth and color images. Sensor performance, including depth accuracy, fill rate, and planar root mean square error, was evaluated to determine the practical scan range. A reconstruction pipeline was implemented using the Open3D library with a voxel-based framework, signed distance function integration, ray casting, and color- and depth-based simultaneous localization and mapping for pose tracking. Surface meshes were generated using the Marching Cubes algorithm. Validation involved scanning rectangular box phantoms and an anthropomorphic Rando phantom in a single circular motion. Reconstructed models were registered to CT-derived meshes using manual point picking and iterative closest point alignment. Accuracy was assessed using cloud-to-mesh distance metrics and compared to Poisson surface reconstruction. The highest accuracy was observed within the 0.3 to 2.0 m range. Dimensional differences for box models were within 5 mm. The Rando phantom showed a registration error of 1.8 mm and 100% theoretical overlap with the CT reference. Global mean signed distance was -0.32 mm with a standard deviation of 3.85 mm. This technique has strong potential to enable accurate, realistic surface modeling using low-cost, open-source tools and supports future integration into radiotherapy digital twin systems.
{"title":"Hybrid LiDAR-RGB 3D surface reconstruction for collision avoidance in radiotherapy: a proof‑of‑concept phantom study.","authors":"Jingjing M Dougherty, Chris J Beltran","doi":"10.1007/s13246-025-01684-1","DOIUrl":"https://doi.org/10.1007/s13246-025-01684-1","url":null,"abstract":"<p><p>To evaluate a proof-of-concept three-dimensional surface reconstruction technique using a hybrid LiDAR and RGB sensor system with an open-source, GPU-accelerated pipeline. The goal is to generate photorealistic digital twins of phantom surfaces for integration into radiotherapy collision avoidance workflows. A portable Intel RealSense sensor was used to acquire synchronized depth and color images. Sensor performance, including depth accuracy, fill rate, and planar root mean square error, was evaluated to determine practical scan range. A reconstruction pipeline was implemented using the Open3D library with a voxel-based framework, signed distance function integration, ray casting, and color and depth-based simultaneous localization and mapping for pose tracking. Surface meshes were generated using the Marching Cubes algorithm. Validation involved scanning rectangular box phantoms and an anthropomorphic Rando phantom in a single circular motion. Reconstructed models were registered to CT-derived meshes using manual point picking and iterative closest point alignment. Accuracy was assessed using cloud-to-mesh distance metrics and compared to Poisson surface reconstruction. Highest accuracy was observed within the 0.3 to 2.0 m range. Dimensional differences for box models were within five millimeters. The Rando phantom showed a registration error of 1.8 mm and 100% theoretical overlap with the CT reference. Global mean signed distance was minus 0.32 mm with a standard deviation of 3.85 mm. 
This technique has strong potential to enables accurate, realistic surface modeling using low-cost, open-source tools and supports future integration into radiotherapy digital twin systems.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145758035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
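The cloud-to-mesh accuracy assessment above can be approximated on point sets: for each scanned point, take the distance to its nearest reference point and average. This brute-force cloud-to-cloud version is a simplified stand-in for a true cloud-to-mesh metric (which measures distance to triangle faces, not vertices):

```python
def mean_nn_distance(cloud, reference):
    """Mean nearest-neighbour distance from each scanned 3D point to a
    reference point set (brute force O(n*m); a simplified proxy for the
    cloud-to-mesh distance used to validate reconstructions)."""
    total = 0.0
    for p in cloud:
        total += min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                     for q in reference)
    return total / len(cloud)
```

A perfectly registered scan gives zero; residual registration error shows up directly as a nonzero mean distance.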
Pub Date: 2025-12-15 DOI: 10.1007/s13246-025-01679-y
S Krishnendu, Maheshwari Biradar
In medical imaging, and particularly in computed tomography (CT), improving image quality while preserving diagnostic content is critical for detecting abnormalities such as tumors, inflammatory conditions, and vascular disease. This paper proposes a novel image enhancement pipeline that integrates several enhancement techniques into a sequential workflow designed specifically for abdominal CT images. The pipeline combines windowing, contrast-limited adaptive histogram equalization, denoising via non-local means, and unsharp masking to concurrently address several issues affecting image quality. Unlike existing methods, the proposed combined approach improves contrast, suppresses noise, and sharpens structural detail, balancing enhancement against diagnostic integrity. The workflow was evaluated on datasets from The Cancer Imaging Archive and the Medical Segmentation Decathlon. The approach was assessed using key image quality metrics, yielding an average Peak Signal-to-Noise Ratio of 31.79 dB, Universal Image Quality Index of 0.96, Feature Similarity Index of 0.93, Absolute Mean Brightness Error of 7.12, and Edge Content of 7.78. These results indicate significant improvements in contrast enhancement, noise reduction, and the preservation of structural details.
{"title":"Enhancing diagnostic information in abdominal computed tomography (CT) images through optimized image enhancement techniques.","authors":"S Krishnendu, Maheshwari Biradar","doi":"10.1007/s13246-025-01679-y","DOIUrl":"https://doi.org/10.1007/s13246-025-01679-y","url":null,"abstract":"<p><p>In medical imaging, particularly in enhancing computed tomography (CT) scan images, improving image quality while preserving diagnostic content is critical for detecting different types of abnormalities, especially in cases such as tumors, inflammatory conditions, or vascular issues. This paper proposes a novel image enhancement pipeline that integrates several image enhancement techniques into a sequential workflow that is specifically designed for abdominal CT scan images. The proposed pipeline combines windowing, contrast-limited adaptive histogram equalization, denoising via non-local means, and unsharp masking to concurrently address several issues affecting the quality of the images. Unlike existing methods, the proposed combinational approach improves contrast, suppresses noise, and sharpens structural detail, guaranteeing the balance between the enhancement and the diagnostic integrity. The workflow was evaluated on datasets from The Cancer Imaging Archive and the Medical Segmentation Decathlon. The proposed approach is assessed using key image quality metrics, yielding an average Peak Signal-to-Noise Ratio of 31.79 dB, Universal Image Quality Index of 0.96, Feature Similarity Index of 0.93, Absolute Mean Brightness Error of 7.12, and Edge Content of 7.78. These results indicate significant improvements in contrast enhancement, noise reduction, and the preservation of structural details. 
We performed an additional qualitative analysis by generating histograms and saliency maps that further confirm the method's effectiveness in enhancing the diagnostic quality of the CT images for both clinical and research purposes.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145758004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
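The final stage of the pipeline, unsharp masking, adds back a scaled difference between the original and a blurred copy. A minimal 1D sketch with a 3-tap mean blur (the actual kernel and amount in the paper are not specified, so these are assumptions):

```python
def unsharp_1d(row, amount=1.0):
    """Unsharp masking on one image row:
    sharpened = original + amount * (original - blurred),
    using a simple 3-tap mean blur. Illustrative sketch, not the
    authors' exact kernel or parameters."""
    blurred = row[:]
    for i in range(1, len(row) - 1):
        blurred[i] = (row[i - 1] + row[i] + row[i + 1]) / 3.0
    return [r + amount * (r - b) for r, b in zip(row, blurred)]
```

On a unit step edge the result overshoots on both sides, which is exactly what increases perceived edge contrast.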
Pub Date: 2025-12-02 DOI: 10.1007/s13246-025-01677-0
Hyun-Cheol Kang, Shinichiro Mori, Tapesh Bhattacharyya, Wataru Furuichi, Naoki Tohyama, Akihiro Nomoto, Nobuyuki Kanematsu, Hiroaki Ikawa, Masashi Koto, Shigeru Yamada
To compare dose to organs at risk (OARs) and target coverage of carbon-ion, proton, and photon beams for patients with head and neck cancer. Treatment plans for carbon-ion pencil beam scanning (C-PBS; 64 Gy (RBE) in 16 fractions), proton pencil beam scanning (P-PBS), and volumetric modulated arc therapy (VMAT) (70 Gy in 35 fractions for both P-PBS and VMAT) were generated and compared using different dose constraints per treatment modality. Dose metrics (e.g., D95, V20) were analyzed. Statistical significance was assessed by the Wilcoxon signed-rank test. We also investigated how many normal tissues were irradiated above the constraint after achieving the planning goals (pass rate) in the OARs. C-PBS outperformed P-PBS and VMAT in PTV coverage (p = 0.01 for both); however, P-PBS and VMAT did not differ substantially from each other (p = 0.35). C-PBS was superior in limiting dose to the OARs. The pass rates for C-PBS, P-PBS, and VMAT were 94%, 81%, and 69%, respectively. C-PBS demonstrated superior performance compared to VMAT and P-PBS in terms of dose conformation to the target volume and normal tissue sparing, and achieved the highest pass rate in meeting dose constraints.
{"title":"Carbon-ions, protons or photons for head and neck cancer radiotherapy-an in silico planning study.","authors":"Hyun-Cheol Kang, Shinichiro Mori, Tapesh Bhattacharyya, Wataru Furuichi, Naoki Tohyama, Akihiro Nomoto, Nobuyuki Kanematsu, Hiroaki Ikawa, Masashi Koto, Shigeru Yamada","doi":"10.1007/s13246-025-01677-0","DOIUrl":"https://doi.org/10.1007/s13246-025-01677-0","url":null,"abstract":"<p><p>To compare dose to the organ at risk (OAR) and target coverage of carbon-ion beam, protons, and photons for patients with head and neck cancer. Treatment plans for carbon-ion pencil beam scanning (C-PBS) (64 Gy (RBE) in 16 fractions), proton pencil beam scanning (P-PBS), and volumetric modulated arc therapy (VMAT) (70 Gy in 35 fractions for P-PBS and VMAT) were generated and compared using different dose constraints per treatment modality. Dose metrics (e.g. D95,V20) were analyzed. Statistical significance was assessed by the Wilcoxon signed-rank test. Also, we investigated howmany normal tissues were irradiated above the constraint after achieving the planning goals (pass rate) in the OARs. C-PBS outperformed P-PBS and VMAT in PTV coverage (p = 0.01 for both); however, P-PBS and VMAT did not differ substantially from each another (p = 0.35). C-PBS was superior in limiting the dose to the OAR. The pass rates for C-PBS, P-PBS, and VMAT were 94%, 81%, and 69%, respectively. 
C-PBS demonstrated superior performance compared to VMAT and P-PBS in terms of dose conformation to the target volume and normal tissue sparing, and achieved the highest pass rate in meeting dose constraints.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145655877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
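The dose metrics named in the abstract (D95, V20) are standard dose-volume-histogram quantities and can be computed directly from a list of per-voxel doses; the 20 Gy threshold below is just the conventional V20 definition, not a plan-specific constraint:

```python
def d95(doses):
    """D95: the minimum dose (Gy) received by the hottest 95% of voxels."""
    s = sorted(doses, reverse=True)
    k = max(1, int(round(0.95 * len(s))))
    return s[k - 1]

def v20(doses, threshold=20.0):
    """V20: fraction of voxels receiving at least `threshold` Gy."""
    return sum(1 for d in doses if d >= threshold) / len(doses)
```

For a toy distribution of 1..100 Gy over 100 voxels, D95 is 6 Gy (the 95th-hottest voxel) and V20 is 0.81.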
Pub Date: 2025-12-01 Epub Date: 2025-09-10 DOI: 10.1007/s13246-025-01637-8
Zhen Hui Chen, Hans Lynggaard Riis, Rohen White, Thomas Milan, Pejman Rowshanfarzad
{"title":"A comprehensive investigation of the radiation isocentre spatial variability in linear accelerators: implications for commissioning, QA, and clinical protocols.","authors":"Zhen Hui Chen, Hans Lynggaard Riis, Rohen White, Thomas Milan, Pejman Rowshanfarzad","doi":"10.1007/s13246-025-01637-8","DOIUrl":"10.1007/s13246-025-01637-8","url":null,"abstract":"","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1979-1993"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738597/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145030942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01 Epub Date: 2025-07-24 DOI: 10.1007/s13246-025-01599-x
Michael John James Douglass
Access to medical imaging data is crucial for research, training, and treatment planning in medical imaging and radiation therapy. However, ethical constraints and time-consuming approval processes often limit the availability of such data for research. This study introduces DICOMator, an open-source Blender add-on designed to address this challenge by enabling the creation of synthetic CT datasets from 3D mesh objects. DICOMator aims to provide researchers and medical professionals with a flexible tool for generating customisable and semi-realistic synthetic CT data, including 4D CT datasets, from user-defined static or animated 3D mesh objects. The add-on leverages Blender's powerful 3D modelling environment, utilising its mesh manipulation, animation, and rendering capabilities to create synthetic data ranging from simple phantoms to accurate anatomical models. DICOMator incorporates various features to simulate common CT imaging artefacts, bridging the gap between 3D modelling and medical imaging. DICOMator voxelises 3D mesh objects, assigns appropriate Hounsfield Unit values, and applies artefact simulations, including detector noise, metal artefacts, and partial volume effects. By incorporating these artefacts, DICOMator produces synthetic CT data that more closely resembles real CT scans. The resulting data is then exported in DICOM format, ensuring compatibility with existing medical imaging workflows and treatment planning systems. To demonstrate DICOMator's capabilities, three synthetic CT datasets were created: a simple lung phantom to illustrate basic functionality; a more realistic cranial CT scan to demonstrate dose calculations and CT image registration on synthetic data in treatment planning systems; and a thoracic 4D CT scan featuring multiple breathing phases to demonstrate the dynamic imaging capabilities and the quantitative accuracy of the synthetic datasets.
These examples were chosen to highlight DICOMator's versatility in generating diverse and complex synthetic CT data suitable for various research and educational purposes, from basic quality assurance to advanced motion management studies. DICOMator offers a promising solution to the limitations of patient CT data availability in medical physics research. By providing a user-friendly interface for creating customisable synthetic datasets from 3D meshes, it has the potential to accelerate research, validate treatment planning tools such as deformable image registration, and enhance educational resources in the field of radiation oncology medical physics. Future developments may include incorporation of other imaging modalities, such as MRI or PET, further expanding its utility in multi-modal imaging research.
{"title":"An open-source tool for converting 3D mesh volumes into synthetic DICOM CT images for medical physics research.","authors":"Michael John James Douglass","doi":"10.1007/s13246-025-01599-x","DOIUrl":"10.1007/s13246-025-01599-x","url":null,"abstract":"<p><p>Access to medical imaging data is crucial for research, training, and treatment planning in medical imaging and radiation therapy. However, ethical constraints and time-consuming approval processes often limit the availability of such data for research. This study introduces DICOMator, an open-source Blender add-on designed to address this challenge by enabling the creation of synthetic CT datasets from 3D mesh objects. DICOMator aims to provide researchers and medical professionals with a flexible tool for generating customisable and semi-realistic synthetic CT data, including 4D CT datasets from user defined static or animated 3D mesh objects. The add-on leverages Blender's powerful 3D modelling environment, utilising its mesh manipulation, animation and rendering capabilities to create synthetic data ranging from simple phantoms to accurate anatomical models. DICOMator incorporates various features to simulate common CT imaging artefacts, bridging the gap between 3D modelling and medical imaging. DICOMator voxelises 3D mesh objects, assigns appropriate Hounsfield Unit values, and applies artefact simulations. These simulations include detector noise, metal artefacts and partial volume effects. By incorporating these artefacts, DICOMator produces synthetic CT data that more closely resembles real CT scans. The resulting data is then exported in DICOM format, ensuring compatibility with existing medical imaging workflows and treatment planning systems. 
To demonstrate DICOMator's capabilities, three synthetic CT datasets were created: a simple lung phantom to illustrate basic functionality; a more realistic cranial CT scan to demonstrate dose calculations and CT image registration on synthetic data in treatment planning systems; and a thoracic 4D CT scan featuring multiple breathing phases to demonstrate the dynamic imaging capabilities and the quantitative accuracy of the synthetic datasets. These examples were chosen to highlight DICOMator's versatility in generating diverse and complex synthetic CT data suitable for various research and educational purposes, from basic quality assurance to advanced motion management studies. DICOMator offers a promising solution to the limitations of patient CT data availability in medical physics research. By providing a user-friendly interface for creating customisable synthetic datasets from 3D meshes, it has the potential to accelerate research, validate treatment planning tools such as deformable image registration, and enhance educational resources in the field of radiation oncology medical physics. Future developments may include incorporation of other imaging modalities, such as MRI or PET, further expanding its utility in multi-modal imaging research.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1525-1538"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738608/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
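The voxelise-then-assign-HU pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration only, not DICOMator's actual code: a sphere stands in for an arbitrary closed 3D mesh (real tools test point-in-mesh containment), and the HU values and noise level are assumed for the example.

```python
import numpy as np

def voxelise_sphere(shape=(64, 64, 64), centre=(32, 32, 32), radius=20):
    """Occupancy grid for a sphere, standing in for mesh voxelisation."""
    z, y, x = np.indices(shape)
    dist = np.sqrt((x - centre[0]) ** 2 + (y - centre[1]) ** 2 + (z - centre[2]) ** 2)
    return dist <= radius  # boolean: voxel centre inside the "mesh"

def to_hu(mask, inside_hu=40.0, outside_hu=-1000.0, noise_sd=15.0):
    """Assign Hounsfield Units, then add Gaussian detector noise."""
    hu = np.where(mask, inside_hu, outside_hu).astype(np.float64)
    hu += np.random.default_rng(0).normal(0.0, noise_sd, mask.shape)
    return hu

mask = voxelise_sphere()
volume = to_hu(mask)  # soft-tissue-like sphere in an air background
print(volume.shape, round(float(volume[mask].mean()), 1))
```

A real pipeline would then write `volume` slice by slice to DICOM (e.g. with pydicom), which is the export step the abstract describes.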
The lower image contrast of megavoltage computed tomography (MVCT), compared with kilovoltage computed tomography (kVCT), can hinder accurate dosimetric assessments. This study proposes a deep learning approach, specifically the pix2pix network, to generate high-quality synthetic kVCT (skVCT) images from MVCT data. The model was trained on a dataset of 25 paired patient images and evaluated on a test set of 15 paired images. We performed visual inspections to assess the quality of the generated skVCT images and calculated the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Dosimetric equivalence was evaluated by comparing the gamma pass rates of treatment plans derived from skVCT and kVCT images. Results showed that skVCT images exhibited significantly higher quality than MVCT images, with PSNR and SSIM values of 31.9 ± 1.1 dB and 94.8% ± 1.3%, respectively, compared to 26.8 ± 1.7 dB and 89.5% ± 1.5% for MVCT-to-kVCT comparisons. Furthermore, treatment plans based on skVCT images achieved excellent gamma pass rates of 99.78 ± 0.14% and 99.82 ± 0.20% for 2 mm/2% and 3 mm/3% criteria, respectively, comparable to those obtained from kVCT-based plans (99.70 ± 0.31% and 99.79 ± 1.32%). This study demonstrates the potential of pix2pix models for generating high-quality skVCT images, which could significantly enhance Adaptive Radiation Therapy (ART).
{"title":"Dosimetric evaluation of synthetic kilo-voltage CT images generated from megavoltage CT for head and neck tomotherapy using a conditional GAN network.","authors":"Yazdan Choghazardi, Mohamad Bagher Tavakoli, Iraj Abedi, Mahnaz Roayaei, Simin Hemati, Ahmad Shanei","doi":"10.1007/s13246-025-01603-4","DOIUrl":"10.1007/s13246-025-01603-4","url":null,"abstract":"<p><p>The lower image contrast of megavoltage computed tomography (MVCT), compared with kilovoltage computed tomography (kVCT), can hinder accurate dosimetric assessments. This study proposes a deep learning approach, specifically the pix2pix network, to generate high-quality synthetic kVCT (skVCT) images from MVCT data. The model was trained on a dataset of 25 paired patient images and evaluated on a test set of 15 paired images. We performed visual inspections to assess the quality of the generated skVCT images and calculated the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Dosimetric equivalence was evaluated by comparing the gamma pass rates of treatment plans derived from skVCT and kVCT images. Results showed that skVCT images exhibited significantly higher quality than MVCT images, with PSNR and SSIM values of 31.9 ± 1.1 dB and 94.8% ± 1.3%, respectively, compared to 26.8 ± 1.7 dB and 89.5% ± 1.5% for MVCT-to-kVCT comparisons. Furthermore, treatment plans based on skVCT images achieved excellent gamma pass rates of 99.78 ± 0.14% and 99.82 ± 0.20% for 2 mm/2% and 3 mm/3% criteria, respectively, comparable to those obtained from kVCT-based plans (99.70 ± 0.31% and 99.79 ± 1.32%). 
This study demonstrates the potential of pix2pix models for generating high-quality skVCT images, which could significantly enhance Adaptive Radiation Therapy (ART).</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1589-1600"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144734067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
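The PSNR figures reported above follow the standard definition, which can be computed in a few lines. This is a generic sketch, not the paper's evaluation code; the data range and the synthetic test images are assumptions for illustration.

```python
import numpy as np

def psnr(reference, test, data_range):
    """Peak signal-to-noise ratio in dB: 20*log10(MAX) - 10*log10(MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 20 * np.log10(data_range) - 10 * np.log10(mse)

# Synthetic example: a reference image and a mildly noisy copy of it.
rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 1.0, (128, 128))
noisy = ref + rng.normal(0.0, 0.02, ref.shape)
print(round(psnr(ref, noisy, data_range=1.0), 1))
```

With Gaussian noise of standard deviation 0.02 on a unit data range, the MSE is about 4e-4, placing the PSNR in the low-to-mid 30s of dB, comparable in scale to the values the abstract reports.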