Deep Learning Image Transformation under Radon Transform
Haoran Chang, Rhodri L. Smith, S. Paisey, R. Boutchko, D. Mitra
2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 31 October 2020, pp. 1-3. DOI: 10.1109/NSS/MIC42677.2020.9507793
Previously, we have shown that an image's location, size, or even a constant attenuation factor can be estimated by deep learning from the image's Radon-transformed representation. In this project, we go a step further and estimate several other mathematical transformation parameters under the Radon transform. The motivation is that many medical imaging problems amount to estimating similar invariance parameters. Such estimations are typically performed after image reconstruction from detector data that lie in the Radon-transformed space, and the reconstruction process introduces additional noise of its own. Deep learning provides a framework for estimating the required information directly from the detector data. A specific case of interest is dynamic nuclear imaging, where quantitative estimates for the target tissues are sought. Motion inherent in biological systems, e.g., breathing motion during in vivo imaging, may be modeled as a transformation in the spatial domain. Motion is particularly prevalent in dynamic imaging, while tracer dynamics in the imaged object are a second source of transformation, in the time domain. Our neural network model attempts to discern these two types of transformation (motion and intensity-variation dynamics), i.e., it learns one type of transformation while ignoring the other.
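The sketch below is a minimal illustration of the core idea, not the authors' implementation: a Shepp-Logan phantom is translated in image space, its Radon transform (sinogram) is computed with scikit-image, and a small convolutional regressor estimates the shift directly from the sinogram rather than from a reconstructed image. The choice of libraries (scikit-image, SciPy, PyTorch), network size, and all hyperparameters are assumptions made only for illustration.

```python
# Illustrative sketch: regress a spatial shift directly from sinogram data.
# Not the authors' model; libraries and hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import shift as nd_shift
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, resize

rng = np.random.default_rng(0)
phantom = resize(shepp_logan_phantom(), (64, 64), anti_aliasing=True)
theta = np.linspace(0.0, 180.0, 60, endpoint=False)  # projection angles

def make_sample(max_shift=5.0):
    """Shift the phantom in image space, then project it into sinogram space."""
    dxdy = rng.uniform(-max_shift, max_shift, size=2)
    moved = nd_shift(phantom, shift=(dxdy[1], dxdy[0]), order=1)  # (rows, cols)
    sino = radon(moved, theta=theta, circle=False)  # (detector bins, angles)
    return sino.astype(np.float32), dxdy.astype(np.float32)

# Small synthetic training set of (sinogram, shift) pairs.
X, y = zip(*(make_sample() for _ in range(256)))
X = torch.from_numpy(np.stack(X)).unsqueeze(1)  # (N, 1, bins, angles)
y = torch.from_numpy(np.stack(y))               # (N, 2)

# Tiny CNN mapping a sinogram to the two translation parameters.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):  # full-batch training on the small synthetic set
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("final training MSE:", loss.item())
```

In the setting the abstract describes, a second nuisance factor (for example, a global intensity scaling standing in for tracer dynamics) could be added to the sample generator, so that the network must recover the motion parameters while ignoring the intensity variation.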