Xiaoqin Tang, M. V. Hoff, J. Hoogenboom, Yuanhao Guo, Fuyu Cai, G. Lamers, F. Verbeek
{"title":"基于正弦图统一的光学投影断层成像荧光与亮场三维图像融合","authors":"Xiaoqin Tang, M. V. Hoff, J. Hoogenboom, Yuanhao Guo, Fuyu Cai, G. Lamers, F. Verbeek","doi":"10.1109/BIBM.2016.7822552","DOIUrl":null,"url":null,"abstract":"In order to preserve sufficient fluorescence intensity and improve the quality of fluorescence images in optical projection tomography (OPT) imaging, a feasible acquisition solution is to temporally formalize the fluorescence and bright-field imaging procedure as two consecutive phases. To be specific, fluorescence images are acquired first, in a full axial-view revolution, followed by the bright-field images. Due to the mechanical drift, this approach, however, may suffer from a deviation of center of rotation (COR) for the two imaging phases, resulting in irregular 3D image fusion, with which gene or protein activity may be located inaccurately. In this paper, we address this problem and consider it into a framework based on sinogram unification so as to precisely fuse 3D images from different channels for CORs between channels that are not coincident or if COR is not in the center of sinogram. The former case corresponds to the COR deviation above; while the latter one correlates with COR alignment, without which artefacts will be introduced in the reconstructed results. After sinogram unification, inverse radon transform can be implemented on each channel to reconstruct the 3D image. The fusion results are acquired by mapping the 3D images from different channels into a common space. Experimental results indicate that the proposed framework gains excellent performance in 3D image fusion from different channels. For the COR alignment, a new automated method based on interest point detection and included in sinogram unification, is presented. 
It outperforms traditional COR alignment approaches in combination of effectiveness and computational complexity.","PeriodicalId":345384,"journal":{"name":"2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography\",\"authors\":\"Xiaoqin Tang, M. V. Hoff, J. Hoogenboom, Yuanhao Guo, Fuyu Cai, G. Lamers, F. Verbeek\",\"doi\":\"10.1109/BIBM.2016.7822552\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In order to preserve sufficient fluorescence intensity and improve the quality of fluorescence images in optical projection tomography (OPT) imaging, a feasible acquisition solution is to temporally formalize the fluorescence and bright-field imaging procedure as two consecutive phases. To be specific, fluorescence images are acquired first, in a full axial-view revolution, followed by the bright-field images. Due to the mechanical drift, this approach, however, may suffer from a deviation of center of rotation (COR) for the two imaging phases, resulting in irregular 3D image fusion, with which gene or protein activity may be located inaccurately. In this paper, we address this problem and consider it into a framework based on sinogram unification so as to precisely fuse 3D images from different channels for CORs between channels that are not coincident or if COR is not in the center of sinogram. The former case corresponds to the COR deviation above; while the latter one correlates with COR alignment, without which artefacts will be introduced in the reconstructed results. After sinogram unification, inverse radon transform can be implemented on each channel to reconstruct the 3D image. 
The fusion results are acquired by mapping the 3D images from different channels into a common space. Experimental results indicate that the proposed framework gains excellent performance in 3D image fusion from different channels. For the COR alignment, a new automated method based on interest point detection and included in sinogram unification, is presented. It outperforms traditional COR alignment approaches in combination of effectiveness and computational complexity.\",\"PeriodicalId\":345384,\"journal\":{\"name\":\"2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/BIBM.2016.7822552\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BIBM.2016.7822552","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography
To preserve sufficient fluorescence intensity and improve the quality of fluorescence images in optical projection tomography (OPT), a feasible acquisition strategy is to organize fluorescence and bright-field imaging temporally as two consecutive phases: fluorescence images are acquired first, over a full axial-view revolution, followed by the bright-field images. Due to mechanical drift, however, the center of rotation (COR) may deviate between the two imaging phases, producing misaligned 3D image fusion in which gene or protein activity may be localized inaccurately. In this paper, we address this problem with a framework based on sinogram unification, which enables precise fusion of 3D images from different channels both when the CORs of the channels do not coincide and when the COR is not at the center of the sinogram. The former case corresponds to the COR deviation described above; the latter concerns COR alignment, without which artefacts are introduced into the reconstructed results. After sinogram unification, the inverse Radon transform is applied to each channel to reconstruct its 3D image, and the fusion result is obtained by mapping the 3D images from the different channels into a common space. Experimental results indicate that the proposed framework achieves excellent performance in 3D image fusion across channels. For COR alignment, a new automated method based on interest-point detection, incorporated into sinogram unification, is presented; it outperforms traditional COR alignment approaches in the combined measure of effectiveness and computational complexity.
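The core idea of sinogram unification — shifting each channel's sinogram so that its COR lies at the central detector bin before applying the inverse Radon transform — can be illustrated with a minimal sketch. This is not the paper's interest-point method; the COR estimate here uses a simple cross-correlation heuristic between the 0° projection and the mirrored 180° projection (which should coincide when the COR is centered), and all function names are illustrative.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.transform import radon, iradon, rescale
from skimage.data import shepp_logan_phantom

def estimate_cor(sinogram):
    """Estimate the center of rotation (detector-axis index) of a sinogram.

    Heuristic: the 0-degree projection and the flipped 180-degree projection
    are mirror images about the COR, so their cross-correlation peak gives
    twice the COR offset from the detector center. (A stand-in for the
    paper's interest-point-based method.)
    """
    p0 = sinogram[:, 0]
    p180 = sinogram[:, -1][::-1]  # assumes the last column is ~180 degrees
    corr = np.correlate(p0 - p0.mean(), p180 - p180.mean(), mode="full")
    lag = np.argmax(corr) - (len(p180) - 1)
    return (sinogram.shape[0] - 1) / 2.0 + lag / 2.0

def unify(sinogram):
    """Shift the sinogram along the detector axis so the COR is centered."""
    center = (sinogram.shape[0] - 1) / 2.0
    return nd_shift(sinogram, (center - estimate_cor(sinogram), 0.0), order=1)

def reconstruct(sinogram, theta):
    """Inverse Radon transform of a COR-centered sinogram (one 2D slice)."""
    return iradon(unify(sinogram), theta=theta)
```

In this sketch, each channel's sinogram would be passed through `unify` before reconstruction, after which the per-channel volumes can be mapped into a common space for fusion; a channel-wise COR deviation simply becomes a different shift per channel.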