Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551877
R. Miyaoka, W. Hunter, L. Pierce
Continuous miniature crystal element (cMiCE) PET detectors use monolithic scintillators coupled to arrays of photosensor elements, with statistics-based methods for positioning of detected events. Current implementations acquire and utilize all photosensor array channels for event positioning (e.g., 64 channels for an 8×8 PMT or SiPM array). We investigate different multiplexing strategies to reduce the number of acquired signal channels, and their impact on positioning performance. This study was conducted using data collected from a cMiCE PET detector. Sixty-four signals were collected per event, and the data were binned into four depth-of-interaction regions. The multiplexing strategies were implemented in software. Strategies investigated included row-column (RC) summing of signals (64 channels -> 16 channels); sampling based upon modulus 3 and modulus 5 patterns of detector channels (64 -> 16); variants of RC summing (e.g., 64 -> 19 or 64 -> 8); and multiplexing based upon principal component analysis. The average intrinsic spatial resolution for the cMiCE detector using all 64 channels for positioning was 1.26 mm FWHM in X and Y. For standard RC summing of signals, the average intrinsic X,Y spatial resolution was 1.32 mm. The intrinsic spatial resolution for the modulus 3 and 5 multiplexing was significantly worse, at 1.43 mm FWHM. An RC summing method that used three additional multiplexed channels along the edges of the crystal provided decoding performance similar to standard RC summing (i.e., 1.31 mm) but better visual spatial positioning in the corners and edges of the detector. However, the most encouraging results came from multiplexing methods based upon the principal components of the detector signals: the intrinsic spatial resolution for this method was the best of all the multiplexing methods (i.e., 1.30 mm FWHM), and it proved to be fairly robust to slight changes in the weighting factors.
In conclusion, signal multiplexing techniques can be applied to monolithic-crystal PET detectors that utilize statistics-based positioning methods. Reducing the number of acquired signal channels by a factor of 3-5 resulted in only 4-10% degradation in spatial resolution.
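As an illustration (a minimal sketch, not the authors' implementation), standard row-column summing of an 8×8 photosensor array reduces 64 channels to 8 row sums plus 8 column sums:

```python
import numpy as np

def row_column_sum(signals):
    """Multiplex an 8x8 photosensor array (64 channels) down to
    8 row sums + 8 column sums (16 channels), as in standard RC summing."""
    a = np.asarray(signals, dtype=float).reshape(8, 8)
    rows = a.sum(axis=1)  # 8 row-summed channels
    cols = a.sum(axis=0)  # 8 column-summed channels
    return np.concatenate([rows, cols])  # 16 multiplexed channels
```

The PCA-based variant described in the abstract would instead project the 64 signals onto a small set of leading principal components learned from calibration data.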
Title: Multiplexing strategies for cMiCE PET detectors
Published in: 2012 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC)
Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551537
Zhiqiang Chen, Ming Chang, Liang Li, Yongshun Xiao, Ge Wang
In recent years, total variation (TV) minimization has been extensively studied as one of the best-known compressed sensing (CS) based CT reconstruction approaches. Its success makes it possible to reduce the X-ray dose, because it needs far less data than conventional reconstruction methods. In this work, a reweighted total variation (RwTV) penalty is adopted in place of TV as a better proxy for L0-minimization regularization. To solve the RwTV-constrained reconstruction problem, we treat the raw-data fidelity and the sparseness constraint separately in an alternating manner, as is common in TV-based reconstruction. The key to our method is the choice of the RwTV weighting parameters, which balance data fidelity against RwTV minimization during convergence. Moreover, an RwTV stopping criterion based on the SNR of the reconstructed image is introduced to guarantee an appropriate number of iterations for the RwTV minimization process. Furthermore, the FISTA method is incorporated to achieve a faster convergence rate. Finally, numerical experiments show the image-quality advantage of our approach over TV minimization when projection data from only 10 views are used.
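The reweighting idea can be sketched as follows (an illustrative numpy sketch; the exact weighting scheme and the `eps` stabilizer are assumptions, not taken from the paper). Weights are the inverse of the current local gradient magnitude, so strong edges are penalized less, approximating an L0 penalty:

```python
import numpy as np

def rwtv_weights(u, eps=1e-3):
    """One reweighting step: w_i = 1 / (|grad u|_i + eps).
    eps (assumed here) stabilizes the weights in flat regions."""
    gx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    gy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return 1.0 / (np.sqrt(gx**2 + gy**2) + eps)

def rwtv(u, eps=1e-3):
    """Weighted TV value: sum_i w_i * |grad u|_i."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return float((rwtv_weights(u, eps) * np.sqrt(gx**2 + gy**2)).sum())
```

In the alternating scheme the weights would be recomputed from the current image estimate after each data-fidelity update.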
Title: A reweighted total variation minimization method for few view CT reconstruction in the instant CT
Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551896
G. L. Zeng, Andrew M. Hernandez, D. Kadrmas, G. Gullberg
This paper uses the method of integration by parts to convert a differential equation into an equation that does not contain any derivatives. A linear estimation model is set up and a closed-form estimation solution is obtained.
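One way the idea can work (a sketch assuming a one-tissue compartment model; the paper's wavelet choice and kinetic model may differ): integration by parts moves the derivative off the noisy data and onto the smooth basis function.

```latex
\frac{dC_T(t)}{dt} = K_1\,C_p(t) - k_2\,C_T(t)
```

Multiplying by a compactly supported basis function $\psi(t)$ (e.g., a wavelet) and integrating by parts, the boundary term vanishes:

```latex
\int \psi(t)\,\frac{dC_T}{dt}\,dt
  = \bigl[\psi(t)\,C_T(t)\bigr] - \int \psi'(t)\,C_T(t)\,dt
  = -\int \psi'(t)\,C_T(t)\,dt
```

so each basis function yields one derivative-free linear equation in the parameters,

```latex
-\int \psi'\,C_T\,dt \;=\; K_1\int \psi\,C_p\,dt \;-\; k_2\int \psi\,C_T\,dt,
```

and stacking several basis functions gives an overdetermined linear system with a closed-form least-squares solution for $(K_1, k_2)$.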
Title: Closed-form kinetic parameter estimation using wavelets
Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551709
Alexander M. Grant, C. Levin
We have developed a method of optically encoding position, energy, and arrival time of annihilation photon interactions in PET detectors with fast optical pulse (130 ps FWHM) trains, and demonstrated it in a two-channel coincidence setup. Two LSO-SiPM detector channels were optically encoded and multiplexed down to one optical fiber readout channel, and custom software was used to decode coincidences in the resulting pulse trains and calculate coincidence timing resolution. Timing resolution of ~168 ps FWHM was achieved using pulse height discrimination, indicating that optical encoding introduces little timing jitter, and showing promise for use in time-of-flight (ToF) PET imaging. We have demonstrated what is essentially two-channel optically multiplexed coincidence detection with only a single digitizer. This technique has the potential to eliminate the need for coincidence processing electronics for every detector channel, thereby reducing the complexity of high-resolution PET scanners with thousands of readout channels. It could eventually replace bulky electronic multiplexing and readout schemes with only a few optical fibers.
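Coincidence timing resolution is conventionally quoted as the FWHM of the time-difference histogram between the two channels; for Gaussian jitter, FWHM = 2.3548 σ. A minimal sketch (not the authors' decoding software):

```python
import numpy as np

def coincidence_fwhm(t1, t2):
    """Coincidence timing resolution: FWHM (same units as the inputs)
    of the time-difference distribution between two detector channels.
    Assumes approximately Gaussian jitter, so FWHM = 2.3548 * std(t1 - t2)."""
    dt = np.asarray(t1, float) - np.asarray(t2, float)
    return 2.3548 * dt.std()
```

Note that two channels with, e.g., ~50 ps jitter each combine in quadrature to a ~70 ps sigma on the difference, i.e., a ~166 ps FWHM.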
Title: Optical encoding and multiplexing of PET coincidence events
Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551669
B. Feng, D. Austin
In this work, a method of generating normalization maps by scanning a large uniform cylindrical object in standard tomography mode (with the collimator on) has been investigated. This method may have several advantages over point-source approaches: First, since the object is not attached to the collimator, the generated normalization map may be less sensitive to geometric changes than the point-source-at-pinhole approach. Second, it represents the typical photon incident angles during imaging. Third, it can be applied to single-pinhole and multi-pinhole collimators. Combined with the point-source normalization at 360 mm distance, a normalization correction map (to correct for the 360 mm point-source normalization) can be generated for the specific isotope and collimator and thus applied to different scanners of the same type.
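The basic idea, that a uniform flood divides out per-pixel sensitivity, can be sketched as follows (hypothetical code, not the authors' implementation):

```python
import numpy as np

def normalization_map(measured_counts, eps=1e-12):
    """Per-pixel normalization factors from a uniform-object scan:
    the expected response is flat, so each factor is the mean counts
    divided by that pixel's measured counts. Multiplying subsequent
    acquisitions by this map flattens detector sensitivity variations.
    eps guards against dead (zero-count) pixels."""
    m = np.asarray(measured_counts, dtype=float)
    return m.mean() / np.maximum(m, eps)
```

Applying the map to the flood scan itself should, by construction, return a flat image at the mean count level.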
Title: Generation of normalization maps for pixelated pinhole SPECT detectors by scanning a uniform cylinder phantom
Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551404
T. Aso, Kazuki Kawashima, T. Nishio, Se Byeong Lee, Takashi Sasaki
Treatment planning systems in proton therapy facilities employ X-ray CT images for patient dose calculation. Since the interactions of X-rays in matter are fundamentally different from those of protons, the X-ray image has to be translated into a proton stopping-power map for dose calculation. This conversion introduces an intrinsic discrepancy and is a known limitation of proton treatment planning based on X-ray imaging. Proton imaging is considered a direct measurement of the stopping-power map and is expected to improve treatment accuracy. In this paper, we report a study of multiplex proton imaging using a GEANT4 simulation. The multiplex proton images were reconstructed using a 250 MeV monoenergetic proton beam. Two imaging schemes were proposed: a range-scan imaging scheme and a multiple-scattering imaging scheme. The range-scan imaging scheme used a range modulator of variable thickness in order to obtain energy-dependent proton images; these images were reconstructed by accumulating energy deposits in a water-equivalent detector 1 cm thick. The multiple-scattering imaging scheme reconstructed images by calculating the spatial displacement due to multiple Coulomb scattering in the object.
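For a range-based scheme, the measurable quantity per proton ray is essentially a water-equivalent thickness (WET). A sketch using the textbook Bragg-Kleeman range rule R = alpha * E^p (the water constants below are standard textbook values, assumed here, not taken from the paper):

```python
def wet_from_residual_range(e0_mev, eres_mev, alpha=0.0022, p=1.77):
    """Water-equivalent thickness (cm) crossed by a proton, from its
    entrance energy e0_mev and residual energy eres_mev, using the
    Bragg-Kleeman rule R(E) = alpha * E**p with water constants
    (alpha in cm, E in MeV): WET = R(E0) - R(E_res)."""
    return alpha * (e0_mev ** p - eres_mev ** p)
```

A 250 MeV proton has a range of roughly 38 cm in water under these constants, which is why that beam energy can traverse a patient-sized object for imaging.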
Title: A study on multiplex proton imaging using GEANT4
Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551508
N. da Silva, M. Gaens, U. Pietrzyk, P. Almeida, H. Herzog
In positron emission tomography (PET), a post-filtering step may be used to reduce image noise. A moving-average filter with a Gaussian kernel is frequently used for this purpose. However, such a filter decreases the spatial resolution and increases the spillover between adjacent structures. These effects become important when dealing with small structures, such as the carotid arteries, with the aim of deriving an image-derived input function (IDIF). In this work, a bilateral filter that incorporates anatomical information from a segmented magnetic resonance image (MRI) is proposed. To test the filter, dynamic FDG images were simulated with GATE (Geant4 Application for Tomographic Emission) for the BrainPET scanner. To evaluate the filter, the signal-to-noise ratio (SNR) of the IDIF was calculated. Moreover, three approaches to estimating the IDIF were examined, based on: i) the average over the carotid volume of interest (VOI), ii) the hottest voxels per plane in the carotid VOI, and iii) the hottest voxels in the carotid VOI. These were evaluated with the area under the curve (AUC) as well as with partial volume coefficients. The results show that the bilateral filter increases the SNR and reduces the differences between the simulated and estimated IDIF. In conclusion, compared to moving-average Gaussian filtering, the proposed filter reduces the partial volume effect (PVE) and increases the SNR.
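A cross (joint) bilateral filter guided by an anatomical image can be sketched in 1D as follows (parameter names and values are illustrative assumptions, not the paper's implementation): spatial weights come from a Gaussian kernel, while range weights come from the guide image, so smoothing does not cross tissue borders.

```python
import numpy as np

def cross_bilateral_1d(pet, guide, sigma_s=1.0, sigma_r=0.5, radius=2):
    """Cross-bilateral filter of a 1D PET profile, with range weights
    taken from an anatomical guide image (e.g., a segmented MRI)."""
    pet = np.asarray(pet, float)
    guide = np.asarray(guide, float)
    out = np.empty_like(pet)
    n = len(pet)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        d = np.arange(lo, hi) - i
        # spatial closeness x anatomical similarity
        w = (np.exp(-d**2 / (2.0 * sigma_s**2))
             * np.exp(-(guide[lo:hi] - guide[i])**2 / (2.0 * sigma_r**2)))
        out[i] = (w * pet[lo:hi]).sum() / w.sum()
    return out
```

With a small `sigma_r`, voxels on opposite sides of a tissue boundary in the guide image get near-zero mutual weight, which is what preserves edges and limits spillover.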
Title: Bilateral filter for image derived input function in MR-BrainPET
Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551437
S. Orsi
The POLAR experiment is a joint European-Chinese project conceived for a precise measurement of hard X-ray polarization and optimized for the detection of the prompt emission of Gamma-Ray Bursts (GRBs) in the 50-500 keV energy range. A first detailed measurement of the polarization from astrophysical sources will lead to a better understanding of the source geometry and of the emission mechanisms. Thanks to its large modulation factor, large effective area, and wide field of view (1/3 of the visible sky), POLAR will be able to reach a minimum detectable polarization (1-σ level) of about 3% for several GRB measurements per year. POLAR is a novel compact space-borne Compton polarimeter consisting of 1600 low-Z plastic scintillator bars, read out by 25 flat-panel multianode photomultipliers. The incoming photons undergo Compton scattering in the bars and produce a modulation pattern; experiments with polarized synchrotron radiation and GEANT4 Monte Carlo simulations have shown that the polarization degree and angle can be retrieved from this pattern with the accuracy necessary for pinning down the GRB mechanisms.
In December 2011 the European Space Agency financed (through its PRODEX office) the construction of three copies of the POLAR detector: two full-scale copies of the flight model are currently under construction in Geneva and will undergo a space qualification campaign; the flight model will be placed onboard the Chinese spacelab TG-2, scheduled for launch into low Earth orbit in 2014.
Title: POLAR: A Gamma-Ray Burst polarimeter in space
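The polarimetry figure of merit mentioned above can be illustrated with a short sketch (a hypothetical example, not POLAR analysis code): the azimuthal Compton-scattering histogram of a polarized beam follows A[1 + mu*cos 2(phi - phi0)], and the modulation factor mu is read off its peak-to-trough contrast.

```python
import numpy as np

def modulation_factor(counts):
    """Modulation factor mu = (N_max - N_min) / (N_max + N_min) of an
    azimuthal scattering-angle histogram. A larger mu means a smaller
    minimum detectable polarization (MDP scales roughly as 1/mu)."""
    c = np.asarray(counts, float)
    return (c.max() - c.min()) / (c.max() + c.min())

# Ideal noiseless modulation curve with mu = 0.4 for a polarized beam
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
counts = 100.0 * (1.0 + 0.4 * np.cos(2.0 * phi))
mu = modulation_factor(counts)
```

In a real measurement the curve would be fitted rather than read off the extrema, to be robust against counting noise.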
Pub Date: 2012-10-01 | DOI: 10.1109/NSSMIC.2012.6551368
J. T. Anderson, M. Albers, M. Alcorta, C. Campbell, M. Carpenter, C. Chiara, M. Cromaz, H. David, D. Doering, D. Doherty, C. Hoffman, R. Janssens, J. Joseph, T. Khoo, A. Kreps, T. Lauritsen, I. Lee, C. Lionberger, C. Lister, T. Madden, M. Oberling, A. Rogers, D. Seweryniak, P. Wilt, S. Zhu, S. Zimmermann
A new data acquisition system for experiments using the Gammasphere detector array and associated detectors, including the double-sided silicon strip detector (DSSD), is under development. Waveform digitization and triggering hardware identical to that developed for GRETINA has been procured and interfaced to both the existing Gammasphere and DSSD detectors to provide significantly increased data throughput and increased event rates. A new parasitic signal connection and cable plant provides the ability to simultaneously measure the same events using both the old and new systems. A triggering interface module has been manufactured that connects the Gammasphere trigger and clock to the new system. A second digital data acquisition system has been attached to the DSSD detector located downstream from Gammasphere. The trigger systems of Gammasphere, Digital Gammasphere and Digital DSSD have successfully synchronized the clocks of all three data acquisition systems, demonstrating timestamp correlation across multiple detector systems. New firmware for the digitizer modules, specific to the signals provided by the Gammasphere detector, has been developed and is currently being tested in situ to directly compare the energy resolution of the two data acquisition systems.
We describe the system as implemented and show test results to date, where significantly faster event processing rates have been obtained with nearly equivalent energy resolution.
Title: A digital data acquisition system for the detectors at gammasphere
Pub Date : 2012-10-01 DOI: 10.1109/NSSMIC.2012.6551907
J. Michálek, M. Capek, J. Janáček, X. Mao, L. Kubínová
Image registration tasks are often formulated as minimization of a functional consisting of a data-fidelity term penalizing the mismatch between the reference and target images, and a term enforcing smoothness of the shift between neighboring pairs of pixels (a min-sum problem). For registration of neighboring physical slices of microscopy specimens with discontinuities, Janacek [1] earlier proposed an L1-distance data-fidelity term and a total variation (TV) smoothness term, and used a graph-cut-based iterative steepest-descent algorithm for minimization. The L1-TV functional is in general non-convex, so a steepest-descent algorithm is not guaranteed to converge to the global minimum. Schlesinger et al. [10] presented an equivalent transformation of max-sum problems to the problem of minimizing a dual quantity called the problem power, which is, contrary to the original max-sum (min-sum) functional, convex (concave). We applied Schlesinger's approach to develop an alternative, multi-label L1-TV minimization algorithm that maximizes the dual problem. We experimentally compared results obtained by the multi-label dual solution with a graph-cut-based minimization. For Schlesinger's subgradient algorithm we proposed a step-control heuristic which considerably enhances both speed and accuracy compared with known step-size strategies for subgradient methods. The registration algorithm is easily parallelizable, since the dynamic-programming maximization of the functional along a horizontal (resp. vertical) gridline is independent of the maximization along any other horizontal (resp. vertical) gridline. We have implemented it both on Core Quad/Core Duo PCs and on a CUDA graphics processing unit, thus significantly speeding up the computation.
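The abstract above centers on minimizing a non-smooth L1-TV energy, where naive subgradient descent is sensitive to the step-size rule. As a rough 1-D illustration of that sensitivity (not the authors' dual or graph-cut algorithm, and with a generic diminishing step c/sqrt(k) rather than their step-control heuristic), one can sketch plain subgradient descent on E(u) = sum_i |u_i - d_i| + lam * sum_i |u_{i+1} - u_i|:

```python
import numpy as np

def l1_tv_subgradient(d, lam=1.0, steps=200, c=1.0):
    """Plain subgradient descent on the 1-D L1-TV energy
    E(u) = sum|u_i - d_i| + lam * sum|u_{i+1} - u_i|,
    using a diminishing step c/sqrt(k); returns the best iterate seen."""
    def energy(u):
        return np.abs(u - d).sum() + lam * np.abs(np.diff(u)).sum()

    u = d.astype(float).copy()
    best_u, best_e = u.copy(), energy(u)
    for k in range(1, steps + 1):
        g = np.sign(u - d)           # subgradient of the L1 data term
        s = np.sign(u[1:] - u[:-1])  # subgradient of the TV term
        g[:-1] -= lam * s            # d/du_i of lam*|u_{i+1}-u_i|
        g[1:] += lam * s             # d/du_{i+1} of the same term
        u = u - (c / np.sqrt(k)) * g
        e = energy(u)
        if e < best_e:               # subgradient steps need not be monotone
            best_u, best_e = u.copy(), e
    return best_u, best_e
```

Because subgradient iterates do not decrease the energy monotonically, the best-seen iterate must be tracked explicitly; the paper's step-control heuristic and Schlesinger's dual (problem-power) maximization are precisely aimed at doing better than such generic step rules.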
Matching of irreversibly deformed images in microscopy based on piecewise monotone subgradient optimization using parallel processing