Locally adaptive denoising of Monte Carlo dose distributions via hybrid median filtering
I. E. El Naqa, J. Deasy, M. Vicic
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352445
A fundamental prerequisite of computer-aided radiotherapy treatment is accurate estimation of the dose distribution, so that a highly homogeneous dose can be delivered to the tumor without causing unnecessary side effects for the patient. The Monte Carlo (MC) method is widely considered the most effective dose-computation technique. However, it is slow, and its results are contaminated with noise that can degrade dose contour visibility and the estimates of dosimetric parameters. In this work, we propose a feature-adaptive median hybrid filter for denoising MC dose distributions. Median filtering has been shown to outperform the moving average (mean) in removing impulsive noise (outliers) and preserving edges, but it fails to provide the same degree of smoothness in homogeneous regions. We combine linear filters with the median operation to produce hybrid median filters: the filter output is a weighted sum of the linear filter output and the median, with weights depending on the properties of the local neighborhood. We evaluated the technique on several datasets: a challenging 2-D synthetic dataset of geometric shapes at different scales with added noise and blurring, and 2-D/3-D water phantoms. Judged by mean square error, the proposed filter performed well in comparison with existing techniques. Denoising of full 3-D real treatment-plan datasets has shown similar promise.
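The weighted mean/median blend described in this abstract can be sketched as follows. This is a minimal 1-D illustration, not the paper's filter: the variance-based weight, the threshold parameter, and the window size are all assumptions made here for demonstration.

```python
from statistics import mean, median, pvariance

def hybrid_median_filter_1d(signal, window=3, var_threshold=1.0):
    """Feature-adaptive hybrid filter (sketch): blend a moving average with
    a median, leaning on the median near edges/outliers (high local
    variance) and on the mean in homogeneous regions (low variance)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        nbhd = signal[lo:hi]
        # Assumed weighting: w -> 1 (pure median) as the local variance
        # grows past the threshold; w -> 0 (pure mean) in flat regions.
        v = pvariance(nbhd)
        w = v / (v + var_threshold)
        out.append(w * median(nbhd) + (1.0 - w) * mean(nbhd))
    return out
```

On a flat signal the filter reduces to the mean and leaves values untouched; on an impulsive spike the high local variance pushes the weight toward the median, which suppresses the outlier.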
A Monte Carlo study of depth of interaction in PET
G. Gulsen, C. Deng, O. Nalcioglu
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352466
In this work, we present a Monte Carlo simulation program for investigating the reconstruction of the entrance point and angle of incidence of γ-rays incident on a crystal, using the light output from adjacent crystals. Initially, the Monte Carlo simulator was used to obtain the light response function (LRF) of each crystal in a detector array with respect to the entrance position and angle of incidence of the incident γ-rays. The simulator was then used to determine the spatial resolution of a 3-layer detector ring consisting of 200 crystals per layer.
A fast, energy-dependent scatter reduction method for 3D PET imaging
H.-T. Chen, C. Kao, C. Chen
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352429
We present a new method of scatter reduction for PET with list-mode acquisition. Our method is based on a predetermined true-fraction table that preferentially rejects scatter events according to the two detected photon energies in a coincidence event. For this method to work, the true-fraction table needs to be insensitive to the anatomy of the phantom. In this paper, using a Monte Carlo technique, we calculate true-fraction tables for various activity and attenuation distributions and demonstrate that these tables are indeed robust to wide variations in both. Hence, by employing a single true-fraction table, very effective scatter reduction can be obtained for PET data derived from substantially different activity distributions. The resulting scatter-reduced PET data are also shown to have improved noise-equivalent counts and to produce images of better contrast. The method is computationally efficient: the operations can be performed in real time during list-mode acquisition, making it attractive for practical use.
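The event-level use of a true-fraction table can be sketched as follows. The binning scheme and the probabilistic-thinning acceptance rule are assumptions for illustration; the paper does not specify how the table is applied to individual events.

```python
import random

def reduce_scatter(events, true_fraction, bin_edges, rng=random.random):
    """Sketch of energy-dependent scatter reduction: each list-mode
    coincidence event carries two detected photon energies; a precomputed
    true-fraction table, indexed by the two energy bins, gives the
    probability that the event is a true (unscattered) coincidence.
    Here each event is kept with that probability (assumed thinning rule)."""
    def bin_of(e):
        for k in range(len(bin_edges) - 1):
            if bin_edges[k] <= e < bin_edges[k + 1]:
                return k
        return len(bin_edges) - 2  # clamp overflow energies to last bin
    kept = []
    for e1, e2 in events:
        if rng() < true_fraction[bin_of(e1)][bin_of(e2)]:
            kept.append((e1, e2))
    return kept
```

Because the decision is a table lookup plus one random draw per event, the operation is cheap enough to run online during acquisition, consistent with the real-time claim in the abstract.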
Validation of GEANT3 simulation studies with a dual-head PMT ClearPET™ prototype
K. Ziemons, U. Heinrichs, M. Streun, U. Pietrzyk
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352542
The ClearPET™ project was proposed by working groups of the Crystal Clear Collaboration (CCC) to develop a second-generation, high-performance small-animal positron emission tomograph (PET). High sensitivity and high spatial resolution are foreseen for the ClearPET™ camera by using a phoswich arrangement combining mixed lutetium yttrium aluminum perovskite (LuYAP:Ce) and lutetium oxyorthosilicate (LSO) scintillating crystals. Design optimizations for the first photomultiplier-tube (PMT) based ClearPET camera were carried out with a Monte Carlo simulation package implemented on GEANT3 (CERN, Geneva, Switzerland). A dual-head prototype was built to test the front-end electronics and was used to validate the GEANT3 simulation tool. Multiple simulations were performed following the experimental protocols to measure the intrinsic resolution and the sensitivity profiles in the axial and radial directions. With a mean energy resolution of about 27.0% included in the simulation, the simulated intrinsic resolution is about (1.41 ± 0.11) mm, compared with a measured value of (1.48 ± 0.06) mm. The simulated sensitivity profiles show a mean square deviation of 12.6% in the axial direction and 3.6% in the radial direction. These results are satisfactory, are representative of all designs, and confirm the scanner geometry.
Characteristics of depth sensing coplanar-grid CdZnTe detectors
Zhong He, B. Sturm
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352687
The latest depth-sensing coplanar-grid CdZnTe detectors, each 1.5 × 1.5 × 1 cm³ with a third-generation coplanar-anode design, have been tested. An energy resolution of 2.0% FWHM was obtained at a gamma-ray energy of 662 keV. Detector performance was observed experimentally as a function of the depth of gamma-ray interaction and as a function of radial position near the anode surface. The difference between the depth-sensing technique and the relative-gain-compensation method is discussed. The measured results show the improvement of the third-generation anode design and the advantage of using the depth-sensing technique to correct for electron trapping. The material uniformity of CdZnTe crystals manufactured by eV Products was directly observed and compared on two 1.5 × 1.5 × 1 cm³ detectors.
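A depth-based trapping correction of the kind mentioned in this abstract can be sketched as dividing the raw anode amplitude by a depth-dependent relative-gain curve. This is a generic illustration only: the piecewise-linear interpolation, the depth convention, and the example gain values are assumptions, not the authors' calibration.

```python
def correct_electron_trapping(raw_amplitude, depth, gain_curve):
    """Sketch of a depth-based electron-trapping correction: divide the raw
    anode amplitude by a relative-gain curve interpolated at the estimated
    interaction depth (here 0 = cathode side, 1 = anode side; assumed).
    gain_curve maps calibration depths to relative gains."""
    depths = sorted(gain_curve)
    if depth <= depths[0]:          # clamp below the calibrated range
        g = gain_curve[depths[0]]
    elif depth >= depths[-1]:       # clamp above the calibrated range
        g = gain_curve[depths[-1]]
    else:
        for a, b in zip(depths, depths[1:]):
            if a <= depth <= b:     # linear interpolation between points
                t = (depth - a) / (b - a)
                g = gain_curve[a] + t * (gain_curve[b] - gain_curve[a])
                break
    return raw_amplitude / g
```

With a hypothetical curve rising from 0.90 at the cathode to 1.00 at the anode, an event at mid-depth with raw amplitude 95 would be corrected to 100.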
Evaluation of a SPECT attenuation correction method using CT data registered with automatic registration software
N. Motomura, M. Takahashi, G. Nakagawara, H. Iida
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352439
In recent years, various SPECT attenuation correction systems using CT data have been developed. For attenuation correction of cerebral SPECT data in routine studies, the software method, which registers separately acquired CT and SPECT data with automatic registration software, has been used far more than the hardware method, which uses CT data acquired on combined SPECT/CT systems. In this work, the software-based method was compared against a gold standard: TCT data acquired in a sequential SPECT/TCT scan with no subject motion. Attenuation-corrected SPECT values obtained using the registered CT data were compared with those obtained using the TCT data. Ten sets of normal-volunteer data were acquired. The difference in attenuation-corrected SPECT values between the SPECT-CT and SPECT-TCT methods was 1.4 ± 1.9% for the entire brain, and the maximum regional difference was 7.8% for both white- and gray-matter regions. Regions where SPECT values were low (e.g., skull, ventricles) were excluded from the evaluation. The results indicate that automatic registration software can register CT to SPECT data quite accurately, and that a software-based attenuation correction method using CT data can correct cerebral data accurately. Consequently, such a software-based attenuation correction method, which requires no specialized hardware, seems feasible for use in routine studies.
Automated breathing motion tracking for 4D computed tomography
I. E. El Naqa, D. Low, J. Deasy, A. Amini, P. Parikh, M. Nystrom
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352583
4D-CT is being developed to provide breathing-motion information for radiation therapy treatment planning. Potential applications include optimization of intensity-modulated beams in the presence of breathing motion and intra-fraction target-volume margin determination for conformal therapy. A major challenge of this process is determining the internal motion (trajectories) from the 4D-CT data. Manual identification and tracking of internal landmarks is impractical: for a single couch position, a 512 × 512 × 12-voxel CT scan contains 3.1 × 10⁶ voxels, and if 15 such scans are acquired throughout the breathing cycle, there are almost 47 million voxels to evaluate, necessitating automation of the registration process. The naturally high contrast among bronchi, vessels, and other lung tissue offers an excellent opportunity to develop automated deformable registration techniques. We have been investigating the use of motion-compensated temporal smoothing based on optical flow for this purpose. Optical-flow analysis uses the CT intensity and temporal (in our case, tidal-volume) gradients to estimate the motion trajectories. The algorithm is applied to 3D image datasets reconstructed at different percentiles of tidal volume. The resulting trajectories can be used to interpolate CT datasets between tidal volumes.
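The optical-flow step described in this abstract rests on the brightness-constancy constraint: with tidal volume playing the role of time, I_x·u + I_v = 0 at each voxel, so the displacement per unit tidal volume u can be recovered by least squares over a small neighborhood. A 1-D Lucas-Kanade-style sketch (the neighborhood solve is a standard technique, not a claim about the authors' exact implementation):

```python
def flow_from_gradients(spatial_grads, volume_grads):
    """Least-squares solution of the 1-D brightness-constancy constraint
    I_x * u + I_v = 0 over a neighborhood of voxels: u minimizes
    sum((I_x * u + I_v)^2), giving u = -sum(I_x * I_v) / sum(I_x^2)."""
    num = -sum(ix * iv for ix, iv in zip(spatial_grads, volume_grads))
    den = sum(ix * ix for ix in spatial_grads)
    if den == 0.0:
        # No spatial contrast: the motion is unobservable here, which is
        # why the abstract stresses the high contrast of bronchi/vessels.
        raise ValueError("no spatial gradient: flow is unobservable")
    return num / den
```

For a neighborhood whose volume gradients are exactly consistent with a displacement of 2 voxels per unit tidal volume (I_v = -2·I_x), the solver returns 2.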
Learning a nonlinear channelized observer for image quality assessment
J. Brankov, I. El-Naqa, Y. Yang, M. Wernick
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352405
We propose two algorithms for task-based image quality assessment based on machine learning. The channelized Hotelling observer (CHO) is a well-known numerical observer used as a surrogate for human observers in assessments of lesion detectability. We explore the possibility of replacing the linear CHO with nonlinear algorithms that learn the relationship between measured image features and lesion detectability obtained from human-observer studies. Our results suggest that both support vector machines and neural networks can offer improved performance over the CHO in predicting human-observer performance.
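The linear baseline that this abstract proposes to replace can be made concrete. The CHO forms a template w = S⁻¹(v̄_signal − v̄_noise) from channel-output vectors v, where S is the average of the two class covariance matrices, and scores an image as t(v) = wᵀv. A minimal two-channel sketch (the toy 2-D data and the unregularized covariance estimate are assumptions for illustration):

```python
def cho_template(signal_feats, noise_feats):
    """Channelized Hotelling observer template for 2-D channel outputs:
    w = S^-1 (mean_signal - mean_noise), with S the average of the two
    class covariance matrices. The test statistic is t(v) = w . v."""
    def mean(rows):
        n = len(rows)
        return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]
    def cov(rows, m):
        n = len(rows)
        c = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    c[i][j] += d[i] * d[j] / n
        return c
    ms, mn = mean(signal_feats), mean(noise_feats)
    cs, cn = cov(signal_feats, ms), cov(noise_feats, mn)
    s = [[(cs[i][j] + cn[i][j]) / 2.0 for j in range(2)] for i in range(2)]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]   # 2x2 inverse by hand
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [ms[0] - mn[0], ms[1] - mn[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]
```

Because t(v) is linear in v, the CHO cannot capture any nonlinear dependence of human detectability on the channel outputs, which is the gap the SVM and neural-network observers in this paper are meant to fill.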
Brain surface extraction from PET images with deformable model: assessment using Monte Carlo simulator
Jussi Tohka, Anu Kivimäki, A. Reilhac, J. Mykkänen, U. Ruotsalainen
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352557
In this study, we quantitatively evaluate the performance of the DM-DSM (deformable model with dual surface minimization) method for brain surface extraction from PET images using Monte Carlo simulated data. The DM-DSM method is based on a deformable model and was found reliable in previous tests with images of healthy volunteers acquired with C-11-raclopride and F-18-FDG; however, because evaluation with real data is challenging, those tests could not provide precise figures for the method's accuracy. In addition to the evaluation, we adjust the parameter values of the DM-DSM method to improve its accuracy. We compare the DM-DSM method with PET brain delineation based on MRI-PET registration, either assuming knowledge of the precise anatomical brain volume or extracting the brain volume from the anatomical MR image. With FDG, the DM-DSM method yielded brain surfaces of high accuracy, almost as accurate as those obtained using image registration with knowledge of the exact anatomy. When the precise anatomical brain volume was not known, the DM-DSM method was more accurate than the registration-based method. With raclopride, the accuracy of the DM-DSM method was slightly lower than with FDG, but it was still better than that obtained using image registration with assumed knowledge of the anatomical brain volume. When the brain volume was extracted automatically from the MR image, the sagittal sinus was excluded from the brain, improving the registration accuracy and leading to better quantitative results than those obtained with the DM-DSM method.
The data acquisition of the Micromegas detector for the CAST experiment
T. Geralis, G. Fanourakis, Y. Giomataris, K. Zachariadou
2003 IEEE Nuclear Science Symposium Conference Record (IEEE Cat. No.03CH37515)
Pub Date: 2003-10-19 | DOI: 10.1109/NSSMIC.2003.1352656
The Micromegas (μM) detector is one of the three detector types (CCD, μM, and TPC) used for solar axion detection in the CAST experiment. The μM detector is sensitive to X-rays in the range of a few hundred eV to 10 keV, originating from axion-to-photon conversion in a strong magnetic field (9 T). Good detection efficiency, energy resolution, spatial resolution, and extremely low background are the characteristics of this type of detector. The data acquisition system of the Micromegas detector is presented here. The front-end cards use multiplexed analog integrated circuits. A set of VME modules performs the readout and is expandable to read up to 2 × 19 × 2048 (77824) channels. The system can apply, online and per individual channel, a threshold and a subsequent pedestal subtraction. At CAST event rates, the dead time is negligible. A PCI-MXI2-VME interface is used to read the data out to a PC, to perform monitoring, and to display events. The system is based on National Instruments' LabVIEW software and runs under both Windows 2000 and Linux. The data are automatically archived on storage media at the central CERN computing facilities, and the PC clock is synchronized periodically with GPS time using the CERN time servers. This allows precise event time stamping, making it possible to correlate events with astrophysical phenomena. The precision of the clock update is of the order of 50 μs. The same system has been used in medical-imaging R&D programs and dark matter searches.
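The per-channel online reduction described in this abstract (threshold, then pedestal subtraction) can be sketched as a simple zero-suppression pass. Whether the threshold is applied to the raw or pedestal-subtracted value is not stated in the abstract; applying it to the raw value is an assumption made here.

```python
def zero_suppress(raw, pedestals, thresholds):
    """Sketch of the per-channel online readout step: keep only channels
    whose raw ADC value exceeds that channel's threshold (assumed to be
    compared against the raw value), then subtract the channel pedestal
    from each kept value. Returns {channel: corrected ADC value}."""
    out = {}
    for ch, adc in enumerate(raw):
        if adc > thresholds[ch]:
            out[ch] = adc - pedestals[ch]
    return out
```

Since each channel needs only a compare and a subtract, this kind of reduction keeps the dead time negligible at CAST event rates, as the abstract notes.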