Background field removal (BFR) is a critical step for successful quantitative susceptibility mapping (QSM). However, eliminating the background field in brains containing significant susceptibility sources, such as intracranial hemorrhages, is challenging due to the relatively large field induced by these pathological susceptibility sources.
This study proposes a new deep learning-based method, BFRnet, to remove the background field in healthy and hemorrhagic subjects. The network is built from dual-frequency octave convolutions on a U-net architecture and trained with synthetic field maps containing significant susceptibility sources. BFRnet is compared with three conventional BFR methods and one previous deep learning method using simulated brains and in vivo data from four healthy and two hemorrhagic subjects. Robustness against the acquisition field-of-view (FOV) orientation and the brain masking strategy is also investigated.
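For readers unfamiliar with the building block named above, the sketch below shows a dual-frequency octave convolution layer of the kind BFRnet stacks on its U-net backbone. The channel split ratio (alpha), kernel size, and use of 3D convolutions are illustrative assumptions, not the published BFRnet configuration.

```python
# Hedged sketch of a dual-frequency "octave" convolution block: features are
# split into a high-frequency branch at full resolution and a low-frequency
# branch at half resolution, connected by four convolution paths.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5, kernel_size=3, padding=1):
        super().__init__()
        self.in_lo = int(in_ch * alpha)        # low-frequency input channels (assumed split)
        self.in_hi = in_ch - self.in_lo
        self.out_lo = int(out_ch * alpha)
        self.out_hi = out_ch - self.out_lo
        # four paths between the high- and low-frequency branches
        self.hh = nn.Conv3d(self.in_hi, self.out_hi, kernel_size, padding=padding)
        self.hl = nn.Conv3d(self.in_hi, self.out_lo, kernel_size, padding=padding)
        self.lh = nn.Conv3d(self.in_lo, self.out_hi, kernel_size, padding=padding)
        self.ll = nn.Conv3d(self.in_lo, self.out_lo, kernel_size, padding=padding)

    def forward(self, x_hi, x_lo):
        # x_hi: full-resolution features, x_lo: half-resolution features
        hh = self.hh(x_hi)
        hl = self.hl(F.avg_pool3d(x_hi, 2))                      # high -> low
        lh = F.interpolate(self.lh(x_lo), size=hh.shape[2:],
                           mode="trilinear", align_corners=False)  # low -> high
        ll = self.ll(x_lo)
        return hh + lh, ll + hl   # fused high- and low-frequency outputs
```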
In both the simulation and in vivo experiments, BFRnet produced the most visually appealing local field and QSM results, with the least contrast loss and the most accurate hemorrhage susceptibility measurements of all five methods. In addition, BFRnet yielded the most consistent local field and susceptibility maps across different brain mask sizes, whereas the conventional methods depended heavily on precise brain extraction and additional erosion of the brain edge. BFRnet also performed best among all BFR methods for acquisition FOVs oblique to the main magnetic field.
The proposed BFRnet improved the accuracy of local field reconstruction in hemorrhagic subjects compared with conventional BFR algorithms. BFRnet was effective for acquisitions at tilted orientations and retained the whole brain without the edge erosion often required by traditional BFR methods.
Proton irradiation is a well-established method for treating deep-seated tumors in radiation oncology. Usually, an X-ray computed tomography (CT) scan is used for treatment planning. Since proton therapy relies on precise knowledge of the stopping power, which describes the energy loss of protons in the patient's tissues, the Hounsfield units of the planning CT have to be converted. This conversion introduces range errors into the treatment plan, which could be reduced if the stopping power values were extracted directly from an image obtained using protons instead of X-rays. Since protons are affected by multiple Coulomb scattering, reconstruction of the 3D stopping power map yields limited image quality if the curved proton path is not considered. This work presents a substantial code extension of the open-source toolbox TIGRE for proton CT (pCT) image reconstruction based on proton radiographs, including a curved proton path estimate. The code extension and the reconstruction algorithms are GPU-based, allowing reconstruction results to be obtained within minutes. The performance of the pCT code extension was tested with Monte Carlo simulated data using three phantoms (Catphan® high resolution and sensitometry modules and a CIRS patient phantom). In the simulations, ideal and non-ideal conditions for a pCT setup were assumed. The mean absolute percentage error was below 1%, and up to 8 lp/cm could be resolved using an idealized setup. These findings demonstrate that the presented code extension to the TIGRE toolbox offers other research groups a fast and accurate open-source pCT reconstruction.
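As an illustration of a curved proton path estimate, the sketch below interpolates a cubic Hermite spline between the measured entry and exit position and direction of a proton, a common surrogate for the most likely path in pCT. It is not the actual API of the TIGRE pCT code extension.

```python
# Illustrative curved-path estimate (not the TIGRE extension itself): a cubic
# Hermite spline joining the entry and exit positions with the measured
# directions as tangents, scaled by the chord length.
import numpy as np

def spline_path(p_in, d_in, p_out, d_out, n_steps=100):
    """p_in, p_out : entry/exit positions (3-vectors, e.g. mm)
       d_in, d_out : entry/exit unit direction vectors"""
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    d_in, d_out = np.asarray(d_in, float), np.asarray(d_out, float)
    length = np.linalg.norm(p_out - p_in)            # chord length scales the tangents
    t = np.linspace(0.0, 1.0, n_steps)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1                    # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p_in + h10 * length * d_in + h01 * p_out + h11 * length * d_out

# Example: a proton scattered slightly while crossing a 200 mm thick object
path = spline_path([0, 0, 0], [0, 0, 1], [3.0, -1.0, 200.0], [0.02, -0.01, 0.999])
```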
The increasing complexity of new treatment methods, as well as of the information technology (IT) infrastructure within radiotherapy, requires new methods for risk analysis. This work presents a methodology for modeling the radiotherapy treatment process at different levels. This subdivision makes it possible to perform workflow-specific risk analyses and to assess the impact of IT risks on the overall treatment workflow.
A Unified Modeling Language (UML) activity diagram is used to model the workflows. The workflows are subdivided into different levels with the help of swim lanes. The model created in this way is exported in an XML-compatible format and stored in a database using a Python program.
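A minimal sketch of this export step is given below: the XML export is parsed with Python's standard library and each activity is stored together with its swim lane in SQLite. The element and attribute names are assumptions for illustration; the actual export schema of the modeling tool is not reproduced here.

```python
# Hedged sketch: read an XML export of a UML activity diagram and store
# activities with their swim lane (workflow level) in a relational database.
import sqlite3
import xml.etree.ElementTree as ET

def import_workflow(xml_path, db_path="workflows.db"):
    tree = ET.parse(xml_path)
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS activities (
                       id TEXT PRIMARY KEY,
                       name TEXT,
                       swimlane TEXT,   -- workflow level, e.g. IT or medical device
                       workflow TEXT)""")
    workflow = tree.getroot().get("name", "unnamed")
    for lane in tree.iter("swimlane"):              # assumed element name
        for node in lane.iter("activity"):          # assumed element name
            con.execute("INSERT OR REPLACE INTO activities VALUES (?, ?, ?, ?)",
                        (node.get("id"), node.get("name"),
                         lane.get("name"), workflow))
    con.commit()
    con.close()
```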
Based on an existing risk analysis, the workflows CT Appointment, Glioblastoma Multiforme, and Deep Inspiration Breath Hold (DIBH) were modeled in detail. Part of the analysis is the automatic generation of workflow-specific risk matrices, including the risks of medical devices incorporated into a specific workflow. In addition, SQL queries allow quick retrieval of, for example, the details of the medical device network installed in a department.
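A hedged example of such a query is shown below; the table and column names are hypothetical and only illustrate the kind of lookup described above.

```python
# Example query: list all medical devices (and their network addresses) that
# appear in the DIBH workflow. Table and column names are hypothetical.
import sqlite3

con = sqlite3.connect("workflows.db")
rows = con.execute("""
    SELECT d.device_name, d.ip_address, a.name AS activity
    FROM devices d
    JOIN activities a ON a.id = d.activity_id
    WHERE a.workflow = 'Deep Inspiration Breath Hold'
    ORDER BY d.device_name
""").fetchall()
for device, ip, activity in rows:
    print(f"{device:30s} {ip:15s} used in: {activity}")
```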
UML activity diagrams can be used to model workflows in radiotherapy. In this way, a connection between the different levels of the entire workflow can be established, and workflow-specific risk analysis becomes possible.
In radiotherapy, X-ray or heavy-ion beams target tumors to damage the DNA of their cells. This damage is mainly induced by secondary low-energy electrons. In this paper, we report DNA molecular breaks at the atomic level as a function of electron energy and type of electron interaction using Monte Carlo simulation. The numbers of DNA single and double strand breaks are compared with experimental results as a function of electron energy. In recent years, atomistic DNA models have been introduced, but the simulations still consider energy deposition in volumes of DNA or water-equivalent material. We simulated an atomistic model of B-DNA in vacuum, comprising 1122 base pairs over a length of 30 nm. Each atom was represented by a sphere whose radius equals its van der Waals radius. We repeatedly simulated 10 million electrons for each energy from 4 eV to 500 eV and counted each interaction type together with its position (x, y, z) within the DNA volume. Based on the number and types of interactions at the atomic level, the numbers of DNA single and double strand breaks were calculated. We found that dissociative electron attachment has the dominant effect on DNA strand breaks at energies below 10 eV compared with excitation and ionization. In addition, our simulation makes it straightforward to discriminate strand and base breaks as a function of the interaction type and energy. In conclusion, knowledge of DNA damage at the atomic level helps design direct internal therapeutic agents for cancer treatment.
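The sketch below illustrates the scoring idea (not the simulation code itself): an interaction is counted as a strand break if its position lies inside the van der Waals sphere of a backbone atom, and single strand breaks on opposite strands within 10 base pairs are paired into double strand breaks. The 10 bp criterion is a common convention assumed here for illustration.

```python
# Hedged scoring sketch: map interaction positions to van der Waals spheres of
# backbone atoms and combine the resulting SSBs into DSBs.
import numpy as np

def score_breaks(hits_xyz, atoms_xyz, atoms_vdw, atom_strand, atom_bp, max_sep_bp=10):
    """hits_xyz    : (N, 3) array of interaction positions
       atoms_xyz   : (M, 3) array of backbone atom centres
       atoms_vdw   : (M,)   van der Waals radii
       atom_strand : (M,)   0 or 1, strand the atom belongs to
       atom_bp     : (M,)   base-pair index of the atom"""
    ssb = []  # (strand, base pair) of every strand break
    for hit in np.asarray(hits_xyz, float):
        d = np.linalg.norm(atoms_xyz - hit, axis=1)
        inside = np.flatnonzero(d <= atoms_vdw)
        if inside.size:                              # hit lies inside at least one atom
            j = inside[np.argmin(d[inside])]
            ssb.append((atom_strand[j], atom_bp[j]))
    # pair SSBs on opposite strands that are close enough into DSBs
    dsb = sum(1 for s0, b0 in ssb for s1, b1 in ssb
              if s0 == 0 and s1 == 1 and abs(b0 - b1) <= max_sep_bp)
    return len(ssb), dsb
```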
Monte Carlo simulations are crucial for calculating magnetic field correction factors for dosimetry in external magnetic fields. Since Monte Carlo codes perform the charged particle transport in straight condensed history (CH) steps, the curved trajectories of these particles in the presence of external magnetic fields can only be approximated. In this study, the charged particle transport in the presence of a strong magnetic field was investigated using the Fano cavity test. The test was performed in an ionization chamber and a diode detector, showing how the step size restrictions must be adjusted to achieve consistent charged particle transport within all geometrical regions.
Monte Carlo simulations of the charged particle transport in a magnetic field of 1.5 T were performed using the EGSnrc code system, including an additional EMF macro for the transport of charged particles in electromagnetic fields. Detailed models of an ionization chamber and a diode detector were placed in a water phantom and irradiated with a so-called Fano source, i.e., a monoenergetic, isotropic electron source in which the number of emitted particles is proportional to the local density.
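As a conceptual illustration of such a Fano source (not the EGSnrc implementation), the sketch below samples monoenergetic electrons with isotropic directions and with the number of emitted particles per voxel proportional to the local mass density, using rejection sampling.

```python
# Conceptual Fano-source sampler: emission probability proportional to the
# local density, isotropic emission direction, single fixed energy.
import numpy as np

rng = np.random.default_rng(0)

def sample_fano_source(density, voxel_size, energy_mev, n_particles):
    """density : 3D array of mass densities; voxel_size : (dz, dy, dx) in cm."""
    p_accept = density / density.max()
    positions, directions = [], []
    while len(positions) < n_particles:
        idx = tuple(rng.integers(0, s) for s in density.shape)
        if rng.random() < p_accept[idx]:             # keep voxels proportional to density
            pos = (np.array(idx) + rng.random(3)) * np.array(voxel_size)
            cos_t = rng.uniform(-1.0, 1.0)           # isotropic direction
            phi = rng.uniform(0.0, 2 * np.pi)
            sin_t = np.sqrt(1.0 - cos_t**2)
            directions.append([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            positions.append(pos)
    return np.array(positions), np.array(directions), np.full(n_particles, energy_mev)
```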
The results of the Fano cavity test strongly depend on the energy of the charged particles and the density within the given geometry. By adjusting the maximum length of the charged particle steps, the deposited dose in the investigated regions could be calculated with high accuracy. The Fano cavity test was performed in all regions of the detailed detector models. Using the default value for the step size in the external magnetic field, the maximum deviation between the Monte Carlo based and analytical dose values in the sensitive volume of the ionization chamber and the diode detector was 8% and 0.1%, respectively.
The Fano cavity test is a crucial validation method for the modeled detectors and the transport algorithms when performing Monte Carlo simulations in a strong external magnetic field. Special care should be taken when calculating dose in volumes of low density. This study has shown that the Fano cavity test is a useful method for adapting the particle transport parameters to a given simulation geometry.
Background: Dosimetric validation of single-isocenter multi-target radiosurgery plans is difficult due to conditions of electronic disequilibrium and the simultaneous irradiation of multiple off-axis lesions dispersed throughout the volume. Here we report the benchmarking of a customizable Monte Carlo secondary dose calculation algorithm specific to multi-target radiosurgery, which future users may use to guide their commissioning and clinical implementation.
Purpose: To report the generation, validation, and clinical benchmarking of a volumetric Monte Carlo (MC) dose calculation beam model for single isocenter radiosurgery of intracranial multi-focal disease.
Methods: The beam model was prepared within SciMoCa (ScientificRT, Munich, Germany), a commercial independent dose calculation software, with the aim of broad availability via the commercial software for use with single-isocenter radiosurgery. The process included (1) definition and acquisition of the measurement data required for beam modeling, (2) tuning of the model parameters to match measurements, (3) validation of the beam model via independent measurements and end-to-end testing, and finally (4) clinical benchmarking and validation of the beam model's utility in a patient-specific QA setting. We used a 6X flattening-filter-free photon beam from a TrueBeam STx linear accelerator (Varian Medical Systems, Palo Alto, CA).
Results: In addition to the measured data required for standard IMRT/VMAT modeling (depth dose, central axis profiles and output factors, leaf gap), beam modeling and validation for single-isocenter SRS required central-axis and off-axis (5 cm and 9 cm) small-field output factors and a comparison between measurement and simulation of backscatter for jaw apertures much larger than the MLC-defined field. End-to-end validation measurements included SRS MapCHECK in the StereoPHAN geometry (2%/1 mm gamma = 99.2% ± 2.2%) and OSL and scintillator measurements in the anthropomorphic STEEV phantom (6 targets, volume = 0.1-4.1 cc, distance from isocenter = 1.2-7.9 cm), for which the mean difference was -1.9% ± 2.2%. For 10 patient cases, the MC agreement for individual PTVs was -0.8% ± 1.5%, -1.3% ± 1.7%, and -0.5% ± 1.8% for mean dose, D95%, and D1%, respectively. This corresponded to custom action limits, per AAPM TG-218 guidelines, of ±5.2%, ±6.4%, and ±6.3%, respectively.
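For readers unfamiliar with the 2%/1 mm criterion quoted above, a brute-force global 2D gamma computation is sketched below. The measurements themselves were analyzed with the vendor's software; the 10% low-dose threshold and isotropic pixel spacing in this sketch are assumptions.

```python
# Hedged sketch of a global 2D gamma analysis (dose difference / distance to
# agreement), evaluated on reference and evaluated dose grids with the same
# isotropic pixel spacing.
import numpy as np

def gamma_pass_rate(ref, eva, spacing_mm, dose_pct=2.0, dta_mm=1.0, threshold=0.1):
    """ref, eva : 2D dose arrays on the same grid; spacing_mm : pixel spacing."""
    dose_crit = dose_pct / 100.0 * ref.max()        # global dose normalisation
    ny, nx = ref.shape
    # search only a small neighbourhood: beyond ~3*DTA the distance term alone
    # already pushes gamma above 1
    search = int(np.ceil(3 * dta_mm / spacing_mm))
    eva_p = np.pad(eva, search, mode="edge")        # edge-replicated padding
    gamma = np.full(ref.shape, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = eva_p[search + dy:search + dy + ny,
                            search + dx:search + dx + nx]
            dist2 = (dy * spacing_mm) ** 2 + (dx * spacing_mm) ** 2
            g2 = dist2 / dta_mm**2 + (shifted - ref) ** 2 / dose_crit**2
            gamma = np.minimum(gamma, np.sqrt(g2))
    mask = ref > threshold * ref.max()              # ignore the low-dose region
    return 100.0 * np.mean(gamma[mask] <= 1.0)
```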
Conclusions: The beam modeling, validation, and clinical action criteria outlined here serve as a benchmark for future users of the customized beam model within SciMoCa for single-isocenter radiosurgery of multi-focal disease.
Introduction: Deep learning-based approaches are increasingly being used for the reconstruction of accelerated MRI scans. However, the published analyses frequently lack a detailed evaluation of basic measures such as resolution or signal-to-noise ratio. To help close this gap, spatially resolved maps of image resolution and noise enhancement (g-factor) are determined and assessed in this paper for typical model- and data-driven MR reconstruction methods.
Methods: MR data from a routine brain scan of a patient were retrospectively undersampled at R = 4 and reconstructed using two data-driven (variational network (VN), U-Net) and two model-based reconstruction methods (GRAPPA, TV-constrained compressed sensing). The local resolution was estimated from the width of the main lobe of a local point-spread function, which was determined for every pixel by reconstructing images with an additional small perturbation. G-factor maps were determined using a multiple-replica method.
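Both evaluation tools are sketched below in simplified, single-coil form. Here, reconstruct and mask are placeholder names for a generic reconstruction function and undersampling pattern, not the code used in the paper, and the FWHM estimate is deliberately crude.

```python
# Hedged sketches of the two evaluation tools: a local PSF via a small image-
# space perturbation, and a multiple-replica g-factor map.
import numpy as np

def local_psf_width(reconstruct, kspace, mask, pixel, eps=1e-3):
    """Reconstruct with and without a small perturbation at `pixel`; the
    difference is the local PSF. Returns a crude FWHM (pixels) of its main
    lobe along the last (undersampled) axis."""
    ref = reconstruct(kspace * mask)
    img = np.fft.ifft2(np.fft.ifftshift(kspace))
    img[pixel] += eps * np.abs(img).max()            # small local perturbation
    kspace_pert = np.fft.fftshift(np.fft.fft2(img))
    psf = np.abs(reconstruct(kspace_pert * mask) - ref)
    profile = psf[pixel[0], :]
    return np.count_nonzero(profile >= profile.max() / 2.0)

def gfactor_map(reconstruct, kspace, mask, R, n_replicas=100, sigma=1.0):
    """Multiple-replica g-factor: pixelwise noise std of accelerated versus
    fully sampled reconstructions, corrected by the sqrt(R) sampling penalty."""
    rng = np.random.default_rng(0)
    full, accel = [], []
    for _ in range(n_replicas):
        noise = sigma * (rng.standard_normal(kspace.shape)
                         + 1j * rng.standard_normal(kspace.shape))
        full.append(np.abs(reconstruct(kspace + noise)))
        accel.append(np.abs(reconstruct((kspace + noise) * mask)))
    return np.std(accel, axis=0) / (np.std(full, axis=0) * np.sqrt(R))
```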
Results: GRAPPA showed good spatial resolution but higher g-factors (1.43-1.84, 75% quartile) than all other methods. The compressed sensing images suffered most from low local resolution, in particular in homogeneous areas of the image. VN and U-Net showed similar resolution with mostly moderate local blurring, slightly better for U-Net. For all methods except GRAPPA, the resolution as well as the g-factors depended on the anatomy and the direction of undersampling.
Conclusion: Objective image quality parameters, namely local resolution and g-factors, have been determined. The examined data-driven methods show less local blurring than compressed sensing. The noise enhancement for reconstructions using CS, VN, and U-Net is elevated at anatomical contours but drastically reduced compared with GRAPPA. Overall, the applied framework enables a more detailed analysis of novel reconstruction approaches incorporating non-linear and non-stationary transformations.