In radiotherapy treatment planning, the extrapolation of computed tomography (CT) values for low-density areas without known materials may differ between CT scanners, resulting in different calculated doses. We evaluated the differences in the percentage depth dose (PDD) calculated using eight CT scanners. Heterogeneous virtual phantoms were created using LN-300 lung and a region assigned -900 HU. For the two types of virtual phantoms, the PDD on the central axis was calculated using five energies, two irradiation field sizes, and two calculation algorithms (the anisotropic analytical algorithm and Acuros XB). For the LN-300 lung, the maximum CT value difference between the eight CT scanners was 51 HU for an electron density (ED) of 0.29 and 8.8 HU for an extrapolated ED of 0.05. The LN-300 lung CT values showed little variation in the CT-ED/physical-density data among CT scanners. The difference in the PDD at each depth point in the LN-300 lung between the CT scanners was < 0.5% for all energies and calculation algorithms. Using Acuros XB, the PDD at -900 HU had a maximum difference between facilities of > 5%, and the dose difference corresponding to an LN-300 lung CT value difference of > 20 HU was > 1% at a field size of 2 × 2 cm². The study findings suggest that calculating dose in low-density regions not represented by known materials in the CT-ED conversion table introduces a risk of dose differences between facilities because of differences in CT value calibration, even when the same CT-ED phantom, radiation treatment planning system, and treatment devices are used.
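As a minimal sketch of the mechanism at issue (not the study's calibration data), the following Python example builds a toy CT-ED conversion table whose lowest measured point is LN-300 lung and linearly extrapolates below it, showing how a modest scanner-dependent shift in the calibrated CT value changes the ED assigned to a -900 HU voxel; all table values are hypothetical.

```python
import numpy as np
from scipy.interpolate import interp1d

def hu_to_ed(lung_hu):
    """Build a toy CT-ED table whose lowest measured point is LN-300 lung.

    Points below the table are linearly extrapolated, as a treatment
    planning system typically does for low-density regions without
    known materials. All values are illustrative.
    """
    hu = np.array([lung_hu, -500.0, 0.0, 240.0, 920.0])
    ed = np.array([0.29, 0.49, 1.00, 1.10, 1.51])
    return interp1d(hu, ed, kind="linear", fill_value="extrapolate")

# A 20 HU scanner-to-scanner shift in the LN-300 calibration point changes
# the ED assigned to a -900 HU voxel, and hence the calculated dose.
for lung_hu in (-700.0, -720.0):
    ed_900 = float(hu_to_ed(lung_hu)(-900.0))
    print(f"LN-300 at {lung_hu:.0f} HU -> ED at -900 HU = {ed_900:.3f}")
```

In this toy table, the 20 HU shift moves the extrapolated ED at -900 HU from about 0.09 to about 0.13, which is the kind of inter-scanner discrepancy the study quantifies dosimetrically.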
The use of machine learning has seen extraordinary growth since the development of deep learning techniques, notably the deep artificial neural network. Deep learning methodology excels in addressing complicated problems such as image classification, object detection, and natural language processing. A key feature of these networks is the capability to extract useful patterns from vast quantities of complex data, including images. As many branches of healthcare revolve around the generation, processing, and analysis of images, these techniques have become increasingly commonplace. This is especially true for radiotherapy, which relies on the use of anatomical and functional images from a range of imaging modalities, such as Computed Tomography (CT). The aim of this review is to provide an understanding of deep learning methodologies, including neural network types and structure, as well as linking these general concepts to medical CT image processing for radiotherapy. Specifically, it focusses on the stages of enhancement and analysis, incorporating image denoising, super-resolution, generation, registration, and segmentation, supported by examples of recent literature.
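To ground these concepts, here is a minimal, illustrative PyTorch sketch (not drawn from the review itself) of one of the named enhancement tasks, image denoising, using the common residual-learning pattern in which a small CNN predicts the noise and subtracts it from the input:

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Minimal residual CNN: predicts the noise in a CT patch and
    subtracts it, a common pattern in deep-learning denoising."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual learning: output = input - predicted noise.
        return x - self.net(x)

noisy = torch.randn(1, 1, 64, 64)   # stand-in for a noisy CT patch
denoised = TinyDenoiser()(noisy)
print(denoised.shape)               # torch.Size([1, 1, 64, 64])
```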
Alzheimer's disease (AD) is a neurodegenerative disorder that poses challenges for early diagnosis and intervention, yet the black-box nature of many predictive models limits clinical adoption. In this study, we developed an advanced machine learning (ML) framework that integrates hierarchical feature selection with multiple classifiers to predict progression from mild cognitive impairment (MCI) to AD. Using baseline data from 580 participants in the Alzheimer's Disease Neuroimaging Initiative (ADNI), categorized into stable MCI (sMCI) and progressive MCI (pMCI) subgroups, we analyzed features both individually and across seven key groups. The neuropsychological test group exhibited the highest predictive power, with several of the top individual predictors drawn from this domain. Hierarchical feature selection, combining initial statistical filtering with machine-learning-based refinement, narrowed the feature set to the eight most informative variables. To demystify model decisions, we applied SHAP (SHapley Additive exPlanations)-based explainability analysis, quantifying each feature's contribution to conversion risk. The explainable random forest classifier, optimized on these selected features, achieved 83.79% accuracy (84.93% sensitivity, 83.32% specificity), outperforming other methods and revealing hippocampal volume, delayed memory recall (LDELTOTAL), and Functional Activities Questionnaire (FAQ) scores as the top drivers of conversion. These results underscore the effectiveness of combining diverse data sources with advanced ML models, and demonstrate that transparent, SHAP-driven insights align with known AD biomarkers, transforming our model from a predictive black box into a clinically actionable tool for early diagnosis and patient stratification.
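The sketch below illustrates the SHAP-over-random-forest pattern described in the abstract, using synthetic stand-ins for the eight selected features; the data, feature semantics, and hyperparameters are assumptions for illustration only, not the study's pipeline.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for the eight selected baseline features
# (e.g., hippocampal volume, LDELTOTAL, FAQ score); values are illustrative.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features,
# quantifying per-feature contributions to the predicted conversion risk.
sv = shap.TreeExplainer(model).shap_values(X)
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]  # class-1 attributions
print("mean |SHAP| per feature:", np.round(np.abs(sv_pos).mean(axis=0), 3))
```

Ranking features by mean absolute SHAP value, as printed here, is the standard way such analyses identify "top drivers" of a prediction.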
This study introduces a novel optimization framework for cranial three-dimensional rotational angiography (3DRA), combining the development of a brain-equivalent in-house phantom with a quantitative Figure of Merit (FOM) evaluation method. The technical contribution involves an in-house phantom constructed using iodine-infused epoxy and lycal resins, validated against clinical Hounsfield unit (HU) values. The customized head phantom was developed to simulate brain tissue and cranial vasculature for 3DRA optimization: epoxy resin with 0.15-0.2% iodine replicates brain tissue, and lycal resin with iodine concentrations of 0.65-0.7% simulates blood vessels of varying diameters. The phantom materials were validated by comparing their HU values with clinical reference HU values from brain tissue and cranial vessels, ensuring accurate tissue simulation. The validated phantom was used to acquire images with cranial 3DRA protocols, specifically Prop-Scan and Roll-Scan. Image quality was assessed using the Signal-Difference-to-Noise Ratio (SDNR), Dose-Area Product (DAP), and Modulation Transfer Function (MTF). Imaging efficiency was quantified using the FOM, calculated as SDNR²/DAP, to objectively compare the performance of the two cranial 3DRA protocols. The task-based optimization showed that Roll-Scan consistently outperformed Prop-Scan across all vessel sizes and regions: Roll-Scan yielded FOM values ranging from 183 to 337, while Prop-Scan FOM values ranged from 96 to 189. Additionally, Roll-Scan delivered better spatial resolution, as indicated by a higher MTF 10% value (0.27 lp/pixel) than Prop-Scan (0.23 lp/pixel). Most notably, Roll-Scan consistently detected 2 mm vessel structures in all regions of the phantom. This capability is clinically important in cerebral angiography, where accurate visualization of small vessels such as the Anterior Cerebral Artery (ACA), Posterior Cerebral Artery (PCA), and Middle Cerebral Artery (MCA) is essential. These findings highlight Roll-Scan as the superior protocol for brain interventional imaging and underscore the significance of the FOM as a comprehensive parameter for optimizing imaging protocols in clinical practice. The experimental results support the use of the Roll-Scan protocol as the preferred acquisition method for cerebral angiography, and the FOM analysis provides substantial, quantifiable evidence for choosing between acquisition methods. Furthermore, the customized in-house phantom is recommended as a candidate optimization tool for clinical medical physicists.
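As a brief illustration of the efficiency metric, the sketch below computes an SDNR from vessel and background ROIs (using the common |mean difference| / background standard deviation definition, an assumption here, since the abstract does not state the exact formula) and then the study's FOM = SDNR²/DAP; all numbers are made up.

```python
import numpy as np

def sdnr(roi_signal, roi_background):
    """Signal-difference-to-noise ratio between a vessel ROI and background:
    |mean difference| divided by the background standard deviation
    (an assumed, commonly used definition)."""
    return abs(np.mean(roi_signal) - np.mean(roi_background)) / np.std(roi_background)

def figure_of_merit(sdnr_value, dap):
    """Imaging efficiency as used in the study: FOM = SDNR^2 / DAP."""
    return sdnr_value ** 2 / dap

# Illustrative pixel values and DAP only, not the study's measurements.
rng = np.random.default_rng(1)
vessel = rng.normal(180.0, 8.0, 500)      # iodine-filled vessel ROI pixels
background = rng.normal(40.0, 8.0, 500)   # brain-equivalent background pixels
s = sdnr(vessel, background)
print(f"SDNR = {s:.1f}, FOM = {figure_of_merit(s, dap=0.8):.0f}")
```

Because DAP appears in the denominator, the FOM rewards protocols that achieve the same contrast-to-noise performance at lower dose, which is what makes it a useful single optimization parameter.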
This study investigates how deep learning (DL) can enhance ovarian cancer diagnosis and staging using large imaging datasets. Specifically, we compare six conventional convolutional neural network (CNN) architectures (ResNet, DenseNet, GoogLeNet, U-Net, VGG, and AlexNet) with OCDA-Net, an enhanced model designed for [18F]FDG PET image analysis. OCDA-Net, an advancement on the ResNet architecture, was thoroughly compared with the other models using datasets randomly split into training (80%), validation (10%), and test (10%) images. Trained over 100 epochs, OCDA-Net achieved superior diagnostic classification with an accuracy of 92% and staging accuracy of 94%, supported by robust precision, recall, and F-measure metrics. Grad-CAM++ heat-maps confirmed that the network attends to hyper-metabolic lesions, supporting clinical interpretability. Our findings show that OCDA-Net outperforms existing CNN models and has strong potential to transform ovarian cancer diagnosis and staging. The study suggests that implementing these DL models in clinical practice could ultimately improve patient prognoses. Future research should expand datasets, enhance model interpretability, and validate these models in clinical settings.
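A minimal sketch of the 80/10/10 split described above is shown below; the stratification by label is an assumption (the abstract only states the proportions), and the dummy data are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_80_10_10(X, y, seed=0):
    """Random 80/10/10 train/validation/test split, stratified by label.
    (Stratification is an assumption; the paper states only the ratios.)"""
    X_train, X_tmp, y_train, y_tmp = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

# Illustrative: 1000 dummy "images" with binary labels.
X, y = np.zeros((1000, 64, 64)), np.tile([0, 1], 500)
train, val, test = split_80_10_10(X, y)
print(len(train[0]), len(val[0]), len(test[0]))  # 800 100 100
```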
In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion. A total of 129 lung CT examinations were analyzed, and the heart rate, height, weight, and body mass index (BMI) of all patients were obtained from medical records. Images with and without CLEAR Motion were reconstructed, and quantitative evaluation was performed using the variance of the Laplacian (VL) and the peak signal-to-noise ratio (PSNR). The difference in VL (DVL) between the two reconstruction methods was used to evaluate in which part of the lung field (upper, middle, or lower) CLEAR Motion is most effective. To evaluate the effect of motion correction in relation to patient characteristics, the correlations of BMI and heart rate with DVL were determined. Visual assessment of motion artifacts was performed using paired comparisons by nine radiological technologists. With the exception of one case, VL was higher with CLEAR Motion. Almost all cases (110) showed the largest DVL in the lower lung field. BMI showed a positive correlation with DVL (r = 0.55, p < 0.05), while no differences in DVL were observed based on heart rate. The average PSNR was 35.8 ± 0.92 dB. Visual assessments indicated that CLEAR Motion was preferred in most cases, with an average preference score of 0.96 (p < 0.05). Using CLEAR Motion allows images with fewer motion artifacts to be obtained in lung CT.
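The variance-of-the-Laplacian metric used here is a standard sharpness measure; a minimal sketch follows (using OpenCV, with a synthetic edge image rather than CT data), showing that a motion-blurred image scores lower.

```python
import cv2
import numpy as np

def variance_of_laplacian(image):
    """Sharpness metric used for motion-artifact assessment: the variance of
    the Laplacian response. Blurred (motion-degraded) images score lower."""
    return cv2.Laplacian(image.astype(np.float64), cv2.CV_64F).var()

# Illustrative: a synthetic edge image and a blurred copy of it.
img = np.zeros((128, 128), np.uint8)
img[:, 64:] = 255
blurred = cv2.GaussianBlur(img, (9, 9), 3)
print(variance_of_laplacian(img), ">", variance_of_laplacian(blurred))
```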
Immediate breast reconstruction is increasing in use in Australia and accounts for almost 10% of breast cancer patients (Roder in Breast 22:1220-1225, 2013). Many treatments include a bolus to increase dose to the skin surface. Air gaps under bolus increase uncertainty in dosimetry, and many bolus types are unable to conform to the shape of the breast or are not flexible throughout treatment if swelling induces a contour change. This study investigates the use of two bolus types that can be manufactured in-house: wet combine and ThermoBolus. Wet combine is a material composed of several water-soaked dressings. ThermoBolus is a product developed in-house that consists of thermoplastic encased in silicone. Plans using a volumetric modulated arc therapy technique were created for each bolus, and dosimetry was performed with thermoluminescent detectors (TLDs) and EBT-3 film over three fractions. Wax was used to simulate swelling and allow analysis of the flexibility of the bolus materials. ThermoBolus had a range of agreement with calculation from -2 to 4% for film measurements and -5.6 to 1.0% for TLDs. Wet combine had a range of agreement with calculation from 1.6 to 10.5% for film measurements and -13.5 to 13.1% for TLDs. Wet combine showed consistent conformity and flexibility for all fractions and with the induced contour change, but air gaps of 2-3 mm were observed between layers of the material. Both ThermoBolus and wet combine are able to conform to contour change without the introduction of large air gaps between the patient surface and the bolus. ThermoBolus is reusable, can be remoulded if the patient undergoes significant contour change during the course of treatment, and is able to be modelled accurately by the treatment planning system. Wet combine shows inconsistency in manufacture and requires more than one bolus to be made over the course of treatment, reducing accuracy in modelling and dosimetry.
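For clarity on the agreement figures quoted above, a tiny sketch of the percentage-difference convention is given below; normalizing to the calculated dose is an assumption, as the abstract does not state the exact normalization, and the doses are made up.

```python
def percent_difference(measured, calculated):
    """Agreement between a point measurement (TLD or film) and the treatment
    planning system calculation, expressed as a percentage of the calculated
    dose (an assumed convention; the paper may normalize differently)."""
    return 100.0 * (measured - calculated) / calculated

# Illustrative doses in Gy, not the study's data.
print(f"{percent_difference(measured=2.08, calculated=2.00):+.1f}%")  # +4.0%
```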
Artificial intelligence has shown great promise in healthcare, particularly in non-invasive diagnostics using biosignals. This study focuses on classifying eye states (open or closed) using electroencephalogram (EEG) signals captured via a 14-electrode neuroheadset, recorded through a brain-computer interface (BCI). A publicly available dataset comprising 14,980 instances was used, where each sample represents EEG signals corresponding to eye activity. Fourteen classical machine learning (ML) models were evaluated using a tenfold cross-validation approach. The preprocessing pipeline involved removing outliers using the Z-score method, addressing class imbalance with SMOTETomek, and applying a bandpass filter to reduce signal noise. Significant EEG features were selected using a two-sample independent t-test (p < 0.05), ensuring only statistically relevant electrodes were retained. Additionally, the Common Spatial Pattern (CSP) method was used for feature extraction to enhance class separability by maximizing variance differences between eye states. Experimental results demonstrate that several classifiers achieved strong performance, with accuracy above 90%. The k-Nearest Neighbours classifier yielded the highest accuracy of 97.92% with CSP and 97.75% without CSP. The application of CSP also enhanced the performance of the Multi-Layer Perceptron and Support Vector Machine, reaching accuracies of 95.30% and 93.93%, respectively. The results affirm that integrating statistical validation, signal processing, and ML techniques can enable accurate and efficient EEG-based eye state classification, with practical implications for real-time BCI systems, offering a lightweight solution for healthcare wearable applications.
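A condensed sketch of this pipeline (bandpass filtering, t-test electrode selection, SMOTETomek rebalancing, tenfold kNN evaluation) is shown below on synthetic data; the sampling rate, the 1-40 Hz band, and treating the dataset's row order as time for the filtering step are all assumptions, and the CSP stage is omitted for brevity.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import ttest_ind
from imblearn.combine import SMOTETomek
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the 14-channel EEG eye-state data (illustrative only).
X = rng.normal(size=(1000, 14))
y = rng.integers(0, 2, size=1000)
X[y == 1, :4] += 0.8  # make the first four "electrodes" informative

# Band-pass each channel (sampling rate and 1-40 Hz band are assumptions).
b, a = butter(4, [1, 40], btype="bandpass", fs=128)
X = filtfilt(b, a, X, axis=0)

# Keep only electrodes whose two-sample t-test is significant (p < 0.05).
_, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
X = X[:, p < 0.05]

# Rebalance classes, then evaluate kNN with tenfold cross-validation.
X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X, y)
scores = cross_val_score(KNeighborsClassifier(), X_bal, y_bal, cv=10)
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```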
Blood pressure is an essential indicator of cardiovascular health, and regular, accurate blood pressure measurement is crucial for preventing cardiovascular diseases. The emergence of photoplethysmography (PPG) and the advancement of machine learning offer new opportunities for noninvasive blood pressure measurement. This paper proposes a non-contact method for measuring blood pressure using face video and machine learning. The method extracts remote photoplethysmography (RPPG) signals from face video captured by a camera and enhances the signal quality of the RPPG through a set of filtering processes. A blood pressure regression model is constructed using the extreme gradient boosting tree (XGBoost) method to estimate blood pressure from the RPPG signals. This approach achieved accurate blood pressure measurement, with a measurement error of 4.8893 ± 6.6237 mmHg for systolic pressure and 4.0805 ± 5.5821 mmHg for diastolic pressure. Experimental results show that the method fully complies with the Association for the Advancement of Medical Instrumentation (AAMI) standard. Our proposed method has minor errors in predicting systolic and diastolic blood pressures and achieves a grade A evaluation for both according to the British Hypertension Society (BHS) standards.
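A minimal sketch of the XGBoost regression stage is given below; the feature set and target are synthetic stand-ins for RPPG-derived waveform features (e.g., peak intervals and amplitudes), and the hyperparameters are illustrative rather than the paper's.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic stand-ins for features extracted from the RPPG waveform;
# the systolic target is simulated, for illustration only.
X = rng.normal(size=(500, 10))
sbp = 120 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, sbp, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAE: {mean_absolute_error(y_te, pred):.2f} mmHg")
```

The mean absolute error printed here corresponds to the mean component of the error figures quoted above; the AAMI criterion additionally bounds the mean and standard deviation of the signed error.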

