Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, comprising an anatomy-guided image translation stage and a self-training segmentation stage. In the translation stage, we first leverage the similarity distributions between patches to capture latent anatomical relationships and propose an anatomical relation consistency (ARC) constraint for preserving correct anatomical relationships. Then, we design a frequency-domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency-domain constraints with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn the abundant label information in unlabeled target volumes. Our proposed method is validated on cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. Experimental results show that it achieves state-of-the-art performance on all tasks and significantly outperforms other 2-D- and 3-D-based UDA methods.
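The volumetric adaptive self-training step — keeping only pseudo-labels whose confidence clears a dynamically chosen, per-class threshold — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name, the quantile-based threshold rule, and the flattened (N, C) probability layout are all assumptions:

```python
import numpy as np

def adaptive_pseudo_labels(probs, base_quantile=0.8):
    """Select pseudo-labels with a per-class dynamic confidence threshold.

    probs: (N, C) softmax outputs for N voxels over C classes.
    For each class, the threshold is the `base_quantile` quantile of that
    class's predicted confidences, so confidently and weakly predicted
    classes are thresholded differently (illustrative stand-in for the
    volumetric adaptive self-training described in the abstract).
    """
    preds = probs.argmax(axis=1)          # hard pseudo-label per voxel
    conf = probs.max(axis=1)              # its confidence
    mask = np.zeros(len(preds), dtype=bool)
    for c in range(probs.shape[1]):
        cls = preds == c
        if cls.any():
            thr = np.quantile(conf[cls], base_quantile)
            mask |= cls & (conf >= thr)   # keep only high-confidence voxels
    return preds, mask
```

Classes the network predicts less confidently receive a lower absolute threshold, so rare structures still contribute pseudo-labels instead of being filtered out wholesale.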
Title: A 3-D Anatomy-Guided Self-Training Segmentation Framework for Unpaired Cross-Modality Medical Image Segmentation
Authors: Yuzhou Zhuang; Hong Liu; Enmin Song; Xiangyang Xu; Yongde Liao; Guanchao Ye; Chih-Cheng Hung
DOI: 10.1109/TRPMS.2023.3332619 · IEEE Transactions on Radiation and Plasma Medical Sciences (Impact Factor 4.4) · Published 2023-11-14
Pub Date: 2023-11-13 · DOI: 10.1109/TRPMS.2023.3332288
Hadley DeBrosse;Ling Jian Meng;Patrick La Rivière
Imaging the spatial distribution of low concentrations of metal is a problem of growing interest, with applications in the medical and material sciences. X-ray fluorescence emission tomography (XFET) is an emerging metal-mapping modality with potential sensitivity improvements and practical advantages over other methods. However, XFET detector placement must first be optimized to ensure accurate metal density quantification and adequate spatial resolution. In this work, we first use singular value decomposition of the imaging model and eigendecomposition of the object-specific Fisher information matrix to study how detector arrangement affects spatial resolution and feature preservation. We then perform joint image reconstructions of a numerical gold phantom. For this phantom, we show that two parallel detectors provide metal quantification with accuracy similar to four detectors, despite the resulting anisotropic spatial resolution in the attenuation map estimate. Two orthogonal detectors provide improved spatial resolution along one axis but underestimate the metal concentration in distant regions. This work therefore demonstrates the minor effect of using fewer, but strategically placed, detectors when detector placement is restricted. It is a critical investigation into the limitations and capabilities of XFET prior to its translation to preclinical and benchtop uses.
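The two analysis tools named here — singular value decomposition of the imaging model and eigendecomposition of the object-specific Fisher information matrix — can be sketched on a toy system. Everything below (the matrix sizes, the random system matrix `H`, the Poisson-type weighting) is an assumed stand-in, not the paper's XFET model:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.random((40, 20))      # hypothetical system matrix: 40 measurements x 20 voxels
x = rng.random(20)            # hypothetical object (flattened metal density map)
ybar = H @ x + 1e-3           # mean measurements; small offset keeps weights finite

# Singular spectrum of the imaging model: small singular values mark object
# components that this detector arrangement recovers poorly.
s = np.linalg.svd(H, compute_uv=False)

# Object-specific Fisher information for Poisson data with a linear model:
# F = H^T diag(1/ybar) H. Its eigenspectrum bounds the attainable variance.
F = H.T @ np.diag(1.0 / ybar) @ H
w = np.linalg.eigvalsh(F)     # ascending eigenvalues
```

Comparing the spectra produced by two candidate `H` matrices (e.g., two vs. four detectors) indicates which arrangement preserves more object features before any reconstruction is run.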
Title: Effect of Detector Placement on Joint Estimation in X-Ray Fluorescence Emission Tomography
DOI: 10.1109/TRPMS.2023.3332288 · Published 2023-11-13
Pub Date: 2023-11-08 · DOI: 10.1109/TRPMS.2023.3330365
Title: 2023 Index IEEE Transactions on Radiation and Plasma Medical Sciences Vol. 7
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10312794
Pub Date: 2023-11-07 · DOI: 10.1109/TRPMS.2023.3330772
Lu Wen;Jianghong Xiao;Chen Zu;Xi Wu;Jiliu Zhou;Xingchen Peng;Yan Wang
Cervical cancer stands as a prominent female malignancy, posing a serious threat to women's health. The clinical solution typically involves time-consuming and laborious radiotherapy planning. Although convolutional neural network (CNN)-based models have been investigated to automate radiotherapy planning by predicting its outcomes, i.e., dose distribution maps, the scarcity of data in the cervical cancer dataset limits the prediction performance and generalization of such models. Additionally, the intrinsic locality of convolution operations hinders models from capturing dose information at a global range, limiting prediction accuracy. In this article, we propose a transformer-embedded transfer learning framework, named DoseTransfer, to automatically predict the dose distribution for cervical cancer. To address the limited cervical cancer data, we leverage highly correlated clinical information from rectum cancer and transfer this knowledge in a two-phase framework. Specifically, the first phase pretrains the model on the rectum cancer dataset to extract prior knowledge, while the second phase transfers the previously learned knowledge to cervical cancer and guides the model to better accuracy. Moreover, both phases embed transformers to capture the global dependencies ignored by CNNs, learning wider feature representations. Experimental results on in-house datasets (i.e., a rectum cancer dataset and a cervical cancer dataset) demonstrate the effectiveness of the proposed method.
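The two-phase idea — pretrain on the data-rich task, then warm-start the data-poor one — can be reduced to a toy linear-regression analogue. This is an assumed stand-in for illustration only; the actual DoseTransfer uses transformer-embedded networks on dose maps, not the synthetic vectors below:

```python
import numpy as np

def fit_linear(X, y, w_init=None, lr=0.1, steps=500):
    """Least-squares fit by gradient descent; w_init allows warm-starting
    (the toy analogue of loading pretrained weights)."""
    w = np.zeros(X.shape[1]) if w_init is None else w_init.astype(float).copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
w_src = rng.normal(size=5)                  # source-task parameters ("rectum" stand-in)
w_tgt = w_src + 0.05 * rng.normal(size=5)   # closely correlated target task ("cervical")

# Phase 1: pretrain on the data-rich source task.
X_src = rng.normal(size=(500, 5))
w_pre = fit_linear(X_src, X_src @ w_src)

# Phase 2: fine-tune on a small target set, warm-started from phase 1,
# versus training from scratch with the same small budget.
X_tgt = rng.normal(size=(10, 5))
y_tgt = X_tgt @ w_tgt
w_warm = fit_linear(X_tgt, y_tgt, w_init=w_pre, steps=20)
w_cold = fit_linear(X_tgt, y_tgt, steps=20)
```

With correlated tasks and a tiny target set, the warm-started fit ends much closer to the target parameters than the cold start — the mechanism the abstract exploits across cancer types.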
Title: DoseTransfer: A Transformer Embedded Model With Transfer Learning for Radiotherapy Dose Prediction of Cervical Cancer
DOI: 10.1109/TRPMS.2023.3330772 · Published 2023-11-07
Spectral computed tomography (CT) offers the possibility to reconstruct attenuation images at different energy levels, which can then be used for material decomposition. However, traditional methods reconstruct each energy bin individually and are vulnerable to noise. In this article, we propose a novel synergistic method for spectral CT reconstruction, named Uconnect. It uses trained convolutional neural networks (CNNs) to connect the energy bins to a latent image so that the full binned data are used synergistically. We experiment on two types of low-dose data: 1) simulated and 2) real patient data. Qualitative and quantitative analyses show that Uconnect outperforms state-of-the-art model-based iterative reconstruction (MBIR) techniques as well as CNN-based denoising.
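The synergy argument — a shared latent image lets all energy bins denoise each other — can be illustrated with a linear toy model in which scalar gains `a_k` stand in for the trained per-bin mappings (an assumption; the paper's U-Net mappings are learned and nonlinear):

```python
import numpy as np

# Each energy bin k observes a scaled, noisy copy of one latent image z.
rng = np.random.default_rng(2)
z = rng.random(10_000)                    # latent image (flattened)
a = np.array([0.6, 0.9, 1.2, 1.5])        # per-bin gains (assumed stand-ins)
bins = [a_k * z + 0.2 * rng.normal(size=z.size) for a_k in a]

# Bin-by-bin estimation (the "traditional" route) vs. synergistic
# least-squares estimation of z from all bins at once.
per_bin = [y_k / a_k for a_k, y_k in zip(a, bins)]
joint = sum(a_k * y_k for a_k, y_k in zip(a, bins)) / np.sum(a**2)

def mse(est):
    return np.mean((est - z) ** 2)
```

The joint estimate's noise variance scales with 1/Σa², while each individual bin only gets 1/a², so the synergistic estimate beats even the best single bin.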
Title: Uconnect: Synergistic Spectral CT Reconstruction With U-Nets Connecting the Energy Bins
Authors: Zhihan Wang; Alexandre Bousse; Franck Vermet; Jacques Froment; Béatrice Vedel; Alessandro Perelli; Jean-Pierre Tasu; Dimitris Visvikis
DOI: 10.1109/TRPMS.2023.3330045 · Published 2023-11-03
Pub Date: 2023-11-02 · DOI: 10.1109/TRPMS.2023.3325699
Title: Member Get-A-Member (MGM) Program
Pub Date: 2023-11-02 · DOI: 10.1109/TRPMS.2023.3325693
Title: IEEE Transactions on Radiation and Plasma Medical Sciences Information for Authors
Pub Date: 2023-11-02 · DOI: 10.1109/TRPMS.2023.3325695
Title: IEEE Transactions on Radiation and Plasma Medical Sciences Publication Information
Pub Date: 2023-11-01 · DOI: 10.1109/TRPMS.2023.3307128
Xi Zhang, Xin Yu, Heng Zhang, Changlin Liu, H. Sabet, S. Xie, Jianfeng Xu, Q. Peng
The spatial resolution of preclinical positron emission tomography (PET) imagers is largely determined by the size of the crystals. This study explores methods to construct PET detectors using crystals with ultrasmall cross section for preclinical PET imagers with ultrahigh resolution. Three 16 × 16 segmented LYSO:Ce crystal arrays were built with different reflectors and assembling techniques using 0.25 × 0.25 × 6.25 mm³ pixels. The crystal arrays were read out by 3-mm SiPMs with a crystal-to-SiPM pixel area ratio of approximately 1:94, and the signals were recorded with custom-designed readout electronics. Two coupling configurations were tested. The arrays were evaluated in terms of flood histogram, energy resolution, and timing resolution. The first array, constructed with discrete LYSO crystals filled with BaSO4 reflectors, had nonuniformly distributed decoding spots in the flood histogram. The second array, constructed with enhanced specular reflector (ESR) reflectors using the slab-sandwich-slice (SSS) production method, had a distorted flood histogram. The third array, constructed with a combination of ESR and BaSO4 using the SSS production method, achieved the best flood histogram in terms of crystal spot uniformity and peak-to-valley ratio (2.80 ± 0.53). The third array also demonstrated good energy resolution (14.89% ± 2.30%) and timing resolution (926.5 ps). These findings suggest that the SSS production method with the combined ESR and BaSO4 reflectors is a promising way to construct detectors for ultrahigh-resolution PET imagers.
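The peak-to-valley ratio quoted above (2.80 ± 0.53 for the third array) is a flood-histogram figure of merit that can be computed from a 1-D profile through the crystal spots. A simplified sketch — real evaluations first segment the 2-D flood map, which this helper does not do:

```python
import numpy as np

def peak_to_valley(profile):
    """Mean crystal-spot peak divided by mean inter-spot valley along a 1-D
    flood-histogram profile (simplified crystal-identification figure of
    merit). Expects a profile that alternates between spots and gaps."""
    peaks = [profile[i] for i in range(1, len(profile) - 1)
             if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]]
    valleys = [profile[i] for i in range(1, len(profile) - 1)
               if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]]
    return float(np.mean(peaks) / np.mean(valleys))
```

For an idealized alternating profile `[1, 5, 1, 5, 1, 5, 1]` this returns 5.0; higher ratios mean the crystal spots separate more cleanly from the gaps between them.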
Title: Development and Evaluation of 0.35-mm-Pitch PET Detectors With Different Reflector Arrangements
DOI: 10.1109/TRPMS.2023.3307128 · Published 2023-11-01
Pub Date: 2023-11-01 · DOI: 10.1109/TRPMS.2023.3310581
R. Latella, Antonio J. Gonzalez, D. Bonifacio, M. Kovylina, A. Griol, J. Benlloch, P. Lecoq, G. Konstantinou
In time-of-flight positron emission tomography (TOF-PET), the timing capabilities of the scintillation-based detector play an important role. One approach to fast timing uses the so-called metascintillators, which combine two materials to synergistically blend their favorable characteristics. An added effect for BGO-based metascintillators is better transport of Cherenkov photons through UV-transparent materials such as plastic (type EJ232). To prove this, we use an optimized coincidence time resolution (CTR) setup based on electronic boards with two output signals (timing and energy) and near-ultraviolet (NUV) and vacuum-ultraviolet (VUV) silicon photomultipliers (SiPMs) from Fondazione Bruno Kessler (FBK), along with different coupling materials. As a reference detector, we employed a 3 × 3 × 5 mm³ LYSO:Ce,Ca crystal pixel coupled with optical grease to an NUV-HD SiPM. The evaluation is based on datasets of low-threshold rise time, energy, and time of arrival of events. Timing results for a BGO/EJ232 3 × 3 × 15 mm³ metapixel show detector time resolutions (DTRs) of 159 ps for the full photopeak. We demonstrate the possibility of event discrimination using subsets with different DTRs drawn from the rise time distributions (RTDs). Finally, we present the synergistic capability of metascintillators to enhance Cherenkov photon detection when used with VUV-sensitive SiPMs.
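A detector time resolution such as the 159-ps figure above is typically unfolded from a coincidence measurement against a known reference. Under the standard assumption of independent, Gaussian timing errors, the detector resolutions add in quadrature (CTR² = DTR_ref² + DTR_dut²); a minimal helper, with the caveat that the paper's actual reference DTR is not stated here:

```python
import math

def dtr_from_ctr(ctr_measured_ps: float, dtr_reference_ps: float) -> float:
    """Unfold the device-under-test timing resolution from a coincidence
    measurement, assuming independent Gaussian timing errors:
    CTR^2 = DTR_ref^2 + DTR_dut^2."""
    if ctr_measured_ps < dtr_reference_ps:
        raise ValueError("measured CTR cannot be below the reference DTR")
    return math.sqrt(ctr_measured_ps**2 - dtr_reference_ps**2)
```

With illustrative numbers, `dtr_from_ctr(5.0, 3.0)` returns 4.0; no attempt is made to reproduce the 159-ps result, since the reference-detector DTR used in the paper is not given in the abstract.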
Title: Exploiting Cherenkov Radiation With BGO-Based Metascintillators
DOI: 10.1109/TRPMS.2023.3310581 · Published 2023-11-01