Title: Implementation of a noise mitigation strategy for a high-pressure xenon detector
Authors: A. Seifert, B. Milbrath, W. Pitts, E. Smith
Published in: IEEE Nuclear Science Symposium Conference Record, 2005
DOI: 10.1109/NSSMIC.2005.1596549
Pub Date: 2005-10-23
Abstract: High-pressure xenon (HPXe) detectors have historically been unable to achieve or even approach their theoretically predicted energy resolution, a shortfall usually attributed to microphonic, vibrational, or acoustic noise. All of these noise sources are expected to have characteristic frequency signatures. We determined the effects of external acoustic noise on the resolution of HPXe spectrometers and implemented a technique that reduces or eliminates the resulting resolution loss in real time. Using a precision waveform generator to drive a 400-watt speaker, we measured the response of a commercial HPXe detector to a variety of constant-frequency acoustic noise signals by performing a fast Fourier transform on a buffered detector output signal and noting distortions in the frequency-domain spectral response. A data acquisition package was then developed that uses this frequency-response information to perform real-time digital noise filtering on each gamma-ray pulse. With external acoustic noise present, the measured resolution of HPXe gamma-ray energy spectra was degraded by a factor of 2 to 3. With the noise-mitigating data acquisition package, the spectroscopic resolution was restored to values comparable to those measured under ideal (noise-free) conditions.
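The frequency-domain filtering described in this abstract can be illustrated with a minimal sketch (not the authors' DAQ package): a detector pulse contaminated by a constant-frequency acoustic tone is cleaned by locating the tone in the FFT of the buffered signal and notching it out. The sample rate, pulse shape, 2 kHz tone, and notch width below are all invented for illustration.

```python
import numpy as np

# Hypothetical illustration of FFT-based removal of a constant-frequency
# acoustic tone from a detector pulse. All numeric choices are assumptions.
fs = 100_000                                   # sample rate, Hz (assumed)
t = np.arange(4096) / fs
pulse = np.exp(-t / 2e-3)                      # idealized preamp pulse, 2 ms decay
noisy = pulse + 0.2 * np.sin(2 * np.pi * 2000.0 * t)   # 2 kHz acoustic pickup

# Locate the interfering tone in the spectrum of the buffered signal.
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(noisy.size, d=1.0 / fs)
tone_bin = np.argmax(np.abs(spectrum[1:])) + 1  # skip the DC bin

# Zero a narrow band around the detected tone and invert the FFT.
spectrum[np.abs(freqs - freqs[tone_bin]) < 100.0] = 0.0
cleaned = np.fft.irfft(spectrum, n=noisy.size)

rms_before = np.sqrt(np.mean((noisy - pulse) ** 2))
rms_after = np.sqrt(np.mean((cleaned - pulse) ** 2))
```

In this toy case the residual error after the notch is well below the error of the raw noisy pulse; the paper's package applies the same idea per gamma-ray pulse in real time.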
Title: Spectroscopic radiation portal monitor prototype
Authors: K. McCormick, D. Stromswold, J. Ely, J. Schweppe, R. Kouzes
DOI: 10.1109/NSSMIC.2005.1596256
Pub Date: 2005-10-23
Abstract: A spectroscopic radiation portal monitor (SPM) prototype consisting of four 10.16 cm × 10.16 cm × 40.64 cm sodium iodide (NaI) crystals has been constructed at Pacific Northwest National Laboratory (PNNL). The prototype was put through a variety of tests, including measurements of the absolute detection efficiency for unshielded sources and of the detection efficiency and isotopic identification capability for shielded isotopic sources. The monitor's response to various types of cargo and source configurations was also studied. The results of these tests are presented in this report.
Title: Fast variance image predictions for quadratically regularized statistical image reconstruction in fan-beam tomography
Authors: Yingying Zhang, J. Fessler, J. Hsieh
DOI: 10.1109/NSSMIC.2005.1596709
Pub Date: 2005-10-23
Abstract: Accurate predictions of variance can be useful for algorithm analysis and for the design of regularization methods. Computing predicted variances at every pixel using matrix-based approximations is impractical. Even recently adopted methods based on local discrete Fourier approximations are impractical, particularly for shift-variant systems like fan-beam tomography, since they require two 2D FFT calculations for every pixel. This paper describes a new analytical approach for predicting the approximate variance maps of images reconstructed by penalized-likelihood estimation with quadratic regularization in a fan-beam geometry. The approach requires computation equivalent to one backprojection plus some simple summations, so it is computationally practical even for the data sizes in X-ray CT. Simulation results show that it gives accurate predictions of the variance maps. The parallel-beam geometry is a simple special case of the fan-beam analysis.
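For context, the "impractical" matrix-based variance prediction that this abstract contrasts with can be written down directly for a toy quadratically penalized weighted-least-squares problem and checked by Monte Carlo. The system matrix, noise level, and penalty below are invented for illustration; the paper's contribution is predicting the diagonal of this covariance without forming any of these matrices.

```python
import numpy as np

# Toy matrix-based variance prediction for penalized weighted least squares,
# checked by Monte Carlo. All dimensions and operators are assumptions.
rng = np.random.default_rng(0)
n_pix, n_rays = 8, 20
A = rng.random((n_rays, n_pix))                  # toy system matrix
x_true = rng.random(n_pix)
sigma = 0.1
W = np.eye(n_rays) / sigma**2                    # W = inverse data covariance
R = 2 * np.eye(n_pix) - np.eye(n_pix, k=1) - np.eye(n_pix, k=-1)  # 1D roughness
beta = 0.5

# Estimator xhat = F^-1 A'W y with F = A'WA + beta R; for Gaussian noise with
# covariance matched by W, Cov(xhat) = F^-1 (A'WA) F^-1.
F = A.T @ W @ A + beta * R
Finv = np.linalg.inv(F)
predicted_var = np.diag(Finv @ (A.T @ W @ A) @ Finv)

# Monte Carlo check of the per-pixel variance prediction.
n_trials = 20_000
y = x_true @ A.T + sigma * rng.standard_normal((n_trials, n_rays))
xhat = y @ (Finv @ A.T @ W).T                    # one estimate per row
empirical_var = xhat.var(axis=0)
```

At CT image sizes, forming and inverting F is hopeless, which is what motivates the backprojection-cost analytical prediction the paper develops.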
Title: Quadratic regularization design for 3D cylindrical PET
Authors: H. Shi, J. Fessler
DOI: 10.1109/NSSMIC.2005.1596794
Pub Date: 2005-10-23
Abstract: Statistical methods for tomographic image reconstruction lead to improved spatial resolution and noise properties in PET. Penalized-likelihood (PL) image reconstruction methods maximize an objective function based on the log-likelihood of the sinogram measurements and on a roughness penalty function that controls noise. In emission tomography, PL methods (and MAP methods) based on conventional quadratic regularization functions lead to nonuniform and anisotropic spatial resolution, even for idealized shift-invariant imaging systems. We have previously addressed this problem for parallel-beam 2D emission tomography and for fan-beam 2D transmission tomography by designing data-dependent, shift-variant regularizers that improve resolution uniformity and isotropy. This paper extends those methods to 3D cylindrical PET, using an analytical design approach that is numerically efficient.
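The penalized-likelihood setup sketched in this abstract has the following generic form (standard notation, not taken from this paper):

```latex
\hat{x} = \arg\max_{x \ge 0} \Phi(x), \qquad
\Phi(x) = L(y \mid x) - \beta R(x), \qquad
R(x) = \sum_j \sum_{k \in \mathcal{N}_j} w_{jk}\, \psi(x_j - x_k)
```

Here \(L(y \mid x)\) is the log-likelihood of the sinogram measurements \(y\), and \(R\) is a roughness penalty over pixel neighborhoods \(\mathcal{N}_j\). The "regularization design" problem is the data-dependent, shift-variant choice of the weights \(w_{jk}\) so that the local impulse response of \(\hat{x}\) is uniform and isotropic across the image.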
Title: Modeling the distance-dependent blurring in transmission imaging in the ordered-subset transmission (OSTR) algorithm by using an unmatched projector/backprojector pair
Authors: B. Feng, M. King, H. Gifford, P. Pretorius, G. L. Zeng, J. Fessler
DOI: 10.1109/NSSMIC.2005.1596891
Pub Date: 2005-10-23
Abstract: In SPECT, accurate emission reconstruction requires attenuation compensation with high-quality attenuation maps. Resolution loss in transmission maps can cause blurring and artifacts in the emission reconstruction. For a transmission system employing parallel-hole collimators and a sheet source, distance-dependent blurring is caused by the non-ideal source and camera collimation and by the finite intrinsic resolution of the detector; these effects can be approximately described by an incremental-blurring model. To compensate for this blurring in iterative transmission reconstruction, we incorporated the incremental-blurring model in the forward projector of the OSTR algorithm but did not include the blur in the backprojector. To evaluate our approach, we simulated transmission projections of the MCAT phantom using a ray-tracing projector in which the rays emanating from a source point form a narrow cone. The geometric blurring due to the non-ideal source and camera collimation was simulated by multiplying the counts along each cone-beam ray by a weight calculated from the overall geometric response function (assumed to be a two-dimensional Gaussian) and then summing the weighted counts into projections. The resulting projections were convolved with the intrinsic response (another two-dimensional Gaussian) to simulate the total system blurring of transmission imaging. Poisson noise was then added to the projection data. We also acquired two sets of transmission measurements of an air-filled Data Spectrum Deluxe SPECT phantom on a Prism 2000 scanning-line-source transmission system. We reconstructed the simulations using the OSTR algorithm, with and without modeling of the incremental blur in the projector. The scaling parameter of the penalty prior was optimized in each case by minimizing the root-mean-square error (RMSE). Reconstructions showed that modeling the incremental blur improved the resolution of the attenuation map and the quantitative accuracy.
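The key property behind an incremental-blurring forward projector can be shown in one dimension: applying a small Gaussian blur at each depth step is equivalent to a single distance-dependent blur whose variance is the sum of the per-step variances. This is a generic sketch of that identity, not the authors' projector; the per-step widths are arbitrary.

```python
import numpy as np

# Demonstrate that successive small Gaussian blurs accumulate into one
# Gaussian blur with sigma_total^2 = sum of per-step sigma^2.
def gauss_kernel(sigma, radius=25):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

signal = np.zeros(201)
signal[100] = 1.0                      # point source along one ray

# Incremental: three small blurs applied step by step through depth.
steps = [1.0, 1.5, 2.0]                # per-step sigmas (assumed)
incremental = signal.copy()
for s in steps:
    incremental = np.convolve(incremental, gauss_kernel(s), mode="same")

# Equivalent single distance-dependent blur.
sigma_total = np.sqrt(sum(s * s for s in steps))
direct = np.convolve(signal, gauss_kernel(sigma_total), mode="same")
```

This additivity is what lets a projector accumulate blur incrementally as it steps away from the collimator face instead of applying a different large kernel at every distance.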
Title: Optimal design of passive gamma-ray spectrometers
Authors: K. Jarman, L. Smith, A. Heredia-Langner, A.R. Renholds, W. Kaye, S. Miller
DOI: 10.1109/NSSMIC.2005.1596435
Pub Date: 2005-10-23
Abstract: Passive gamma-ray spectrometers composed of attenuation filters and integrating detector materials offer important advantages, in terms of zero-power operation and ruggedness, for long-term monitoring scenarios (e.g., national security or environmental remediation). However, the many design parameters, including attenuation filter material and thickness and the number of pixels (filter/integrating-material combinations), present a challenging optimization problem when designing spectrometers for different applications. In many of these applications, the goal is simply anomaly detection: deciding that a gamma-ray source not normally found in the nuisance-source population of that particular measurement environment is present. A passive-spectrometer design-study approach using an anomaly detection metric is presented here. It is founded on "injecting" target sources of interest (e.g., 57Co, 133Ba, 137Cs) into a nuisance-source population that represents the widely varying backgrounds typical of long-term monitoring scenarios. The design evaluation metric is quantified by the probability of detection given a required probability of false alarm. A genetic algorithm employs this metric to probe the large design space and identify superior spectrometer designs.
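The design-evaluation metric can be sketched as follows: inject a target source into simulated nuisance-background count vectors, score each measurement, set the alarm threshold at the required false-alarm rate, and report the detection probability. This is a hypothetical illustration, not the paper's metric implementation; the count levels, four-pixel layout, and naive total-counts score are invented.

```python
import numpy as np

# Hypothetical detection-probability-at-fixed-false-alarm metric of the
# kind that could drive a spectrometer design optimizer.
rng = np.random.default_rng(1)
n_trials = 50_000
n_pixels = 4                                     # filter/detector pixels (assumed)
bkg_rate = rng.uniform(80, 120, size=n_pixels)   # varying nuisance background

background = rng.poisson(bkg_rate, size=(n_trials, n_pixels))
injected = rng.poisson(bkg_rate + 25.0, size=(n_trials, n_pixels))  # + source

score_bkg = background.sum(axis=1).astype(float) # naive score: total counts
score_src = injected.sum(axis=1).astype(float)

p_fa = 1e-3                                      # required false-alarm probability
threshold = np.quantile(score_bkg, 1.0 - p_fa)   # alarm threshold from background
p_d = np.mean(score_src > threshold)             # metric handed to the optimizer
```

A genetic algorithm would evaluate `p_d` for each candidate filter/pixel design and evolve toward designs that maximize it at the required false-alarm rate.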
Title: Evaluation framework for search instruments
Authors: G. Warren, L. Smith, M. Cooper, W. Kaye
DOI: 10.1109/NSSMIC.2005.1596265
Pub Date: 2005-10-23
Abstract: A framework for quantitatively evaluating current and proposed gamma-ray instruments intended for search applications has been developed. The framework is designed to generate a large library of "virtual neighborhoods" that can be used to assess the performance of nearly any gamma-ray sensor type (e.g., handhelds or Compton imagers). Calculating nuisance-source emissions and combining various sources to create a large number of random virtual scenes places a significant computational burden on the development of the framework. To reduce this burden, a number of radiation-transport simplifications have been made that retain the essential physics for the quantitative assessment of search instruments while significantly reducing computation times. The general approach to creating the evaluation framework and the simplifying transport assumptions that make it computationally tractable are discussed, and examples of how such a framework might be used by the national and homeland security communities are provided.
Title: A study of four minimization approaches for iterative reconstruction in X-ray CT
Authors: B. De Man, S. Basu, Jean-Baptiste Thibault, J. Hsieh, J. Fessler, C. Bouman, K. Sauer
DOI: 10.1109/NSSMIC.2005.1596895
Pub Date: 2005-10-23
Abstract: This paper compares four minimization approaches for iterative reconstruction in CT: (1) iterative coordinate descent (ICD), (2) conjugate gradients (CG), (3) separable parabolic surrogates with ordered subsets (OS), and (4) convergent ordered subsets (COS). In addition to showing that all four approaches converge to the same final image, the paper gives an indication of the number of iterations and the time to convergence for each approach.
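The central claim that different minimizers of the same cost reach the same image can be illustrated on a toy quadratic. Below, a coordinate-descent loop (ICD-like) and conjugate gradients minimize the same cost f(x) = ½x'Hx − b'x and agree with the closed-form minimizer. The small random SPD system stands in for the CT problem; this is an illustration, not the paper's experiment.

```python
import numpy as np

# Coordinate descent and conjugate gradients on one quadratic cost:
# both converge to the unique minimizer H^-1 b.
rng = np.random.default_rng(2)
n = 12
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)          # symmetric positive definite Hessian
b = rng.standard_normal(n)

# ICD-like: exact 1D minimization over each coordinate, repeated sweeps.
x_cd = np.zeros(n)
for _ in range(200):
    for j in range(n):
        x_cd[j] += (b[j] - H[j] @ x_cd) / H[j, j]

# Conjugate gradients on the equivalent linear system H x = b.
x_cg = np.zeros(n)
r = b - H @ x_cg
p = r.copy()
for _ in range(n):                    # CG finishes in at most n exact steps
    Hp = H @ p
    alpha = (r @ r) / (p @ Hp)
    x_cg += alpha * p
    r_new = r - alpha * Hp
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

x_star = np.linalg.solve(H, b)
```

What the paper then quantifies is exactly what this sketch glosses over: how many iterations, and how much wall-clock time, each approach needs to get there at CT problem sizes.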
Title: Simplified statistical image reconstruction for polyenergetic X-ray CT
Authors: S. Srivastava, J. Fessler
DOI: 10.1109/NSSMIC.2005.1596614
Pub Date: 2005-10-23
Abstract: In X-ray computed tomography (CT), bony structures cause beam-hardening artifacts that appear in the reconstructed image as streaks and shadows. Currently, there are two classes of methods for correcting bone-related beam hardening. The standard approach used with filtered backprojection (FBP) reconstruction is the Joseph and Spital (JS) method. In the current simulation study (inspired by a clinical head scan), the JS method requires a simple table or polynomial model for correcting water-related beam hardening, plus two additional tuning parameters to compensate for bone. Like all FBP methods, it is sensitive to data noise. Statistical methods have also been proposed recently for image reconstruction from noisy polyenergetic X-ray data. However, these methods have required more knowledge of the X-ray spectrum than the JS method needs, hampering their use in practice. This paper proposes a simplified statistical image reconstruction approach for polyenergetic X-ray CT that uses the same calibration data and tuning parameters as the JS method, thereby facilitating its practical use. Simulation results indicate that the proposed method provides improved image quality (reduced beam-hardening artifacts and noise) compared to the JS method, at the price of increased computation. The results also indicate that the image quality of the proposed method is comparable to that of a method requiring more beam-hardening information.
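The water pre-correction that both classes of methods share can be sketched as follows: for a polyenergetic spectrum, the negative log of the measured transmission is a nonlinear function of water thickness, and a calibration polynomial maps it back to the ideal monoenergetic line integral. The two-component "spectrum" and attenuation coefficients below are invented for illustration and are not the paper's calibration.

```python
import numpy as np

# Toy water beam-hardening correction: fit a polynomial that linearizes
# polyenergetic log data against the ideal monoenergetic line integral.
w_spec = np.array([0.6, 0.4])          # spectrum weights (assumed)
mu_water = np.array([0.25, 0.18])      # water mu at the two energies, 1/cm (assumed)
mu_eff = w_spec @ mu_water             # effective monoenergetic coefficient

thick = np.linspace(0.0, 30.0, 61)     # water thickness, cm
raw = -np.log(w_spec @ np.exp(-np.outer(mu_water, thick)))  # hardened data
ideal = mu_eff * thick                 # what a monoenergetic source would give

# Calibration: polynomial mapping raw (hardened) data to ideal data.
coeffs = np.polyfit(raw, ideal, deg=3)
corrected = np.polyval(coeffs, raw)
max_err = np.max(np.abs(corrected - ideal))
```

The JS method adds two tuning parameters on top of a table or polynomial like this to handle bone; the proposed statistical method reuses exactly this calibration data rather than requiring the full spectrum.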
Title: Creation of realistic radiation transport models of radiation portal monitors for homeland security purposes
Authors: S. Robinson, R. Kouzes, R. McConn, R. Pagh, J. Schweppe, E. Siciliano
DOI: 10.1109/NSSMIC.2005.1596425
Pub Date: 2005-10-23
Abstract: Much of the data used to analyze and calibrate alarm algorithms for radiation portal monitor (RPM) systems has come from actual measurements of vehicles passing through RPMs. Because of the inherent limitations and expense of taking data with controlled radioactive sources, the majority of these data contain no sources other than naturally occurring radioactive material in cargo, in the presence of natural background. Advances in computing capabilities have made it feasible to simulate "in-the-field" detector responses for a wide variety of source/cargo configurations and to produce data matching that generated in the field. The RPM project has developed computational models for many detectors, vehicles, cargo configurations, and sources. These models are being used to simulate RPM responses to complicated source/cargo configurations for vehicles with and without sources. The simulated data are being used, and will be used, to 1) complement existing field data, 2) help guide future data taking, 3) improve our ability to calibrate and refine alarm algorithms, 4) verify the causes of effects seen in the field, and 5) look for unknown effects not predicted by theoretical models. A large set of simulated data validated against field data will allow in-depth testing of detection alarm algorithms for a variety of source scenarios.