{"title":"低计数动态脑 PET 的深度学习辅助帧内运动校正","authors":"Erik Reimers;Ju-Chieh Cheng;Vesna Sossi","doi":"10.1109/TRPMS.2023.3333202","DOIUrl":null,"url":null,"abstract":"Data-driven intraframe motion correction of a dynamic brain PET scan (with each frame duration on the order of minutes) is often achieved through the co-registration of high-temporal-resolution (e.g., 1-s duration) subframes to estimate subject head motion. However, this conventional method of subframe co-registration may perform poorly during periods of low counts and/or drastic changes in the spatial tracer distribution over time. Here, we propose a deep learning (DL), U-Net-based convolutional neural network model which aids in the PET motion estimation to overcome these limitations. Unlike DL models for PET denoising, a nonstandard 2.5-D DL model was used which transforms the high-temporal-resolution subframes into nonquantitative DL subframes which allow for improved differentiation between noise and structural/functional landmarks and estimate a constant tracer distribution across time. When estimating motion during periods of drastic change in spatial distribution (within the first minute of the scan, ~1-s temporal resolution), the proposed DL method was found to reduce the expected magnitude of error (+/−) in the estimation for an artificially injected motion trace from 16 mm and 7° (conventional method) to 0.7 mm and 0.6° (DL method). During periods of low counts but a relatively constant spatial tracer distribution (60th min of the scan, ~1-s temporal resolution), an expected error was reduced from 0.5 mm and 0.7° (conventional method) to 0.3 mm and 0.4° (DL method). The use of the DL method was found to significantly improve the accuracy of an image-derived input function calculation when motion was present during the first minute of the scan.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"53-63"},"PeriodicalIF":4.6000,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep-Learning-Aided Intraframe Motion Correction for Low-Count Dynamic Brain PET\",\"authors\":\"Erik Reimers;Ju-Chieh Cheng;Vesna Sossi\",\"doi\":\"10.1109/TRPMS.2023.3333202\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Data-driven intraframe motion correction of a dynamic brain PET scan (with each frame duration on the order of minutes) is often achieved through the co-registration of high-temporal-resolution (e.g., 1-s duration) subframes to estimate subject head motion. However, this conventional method of subframe co-registration may perform poorly during periods of low counts and/or drastic changes in the spatial tracer distribution over time. Here, we propose a deep learning (DL), U-Net-based convolutional neural network model which aids in the PET motion estimation to overcome these limitations. Unlike DL models for PET denoising, a nonstandard 2.5-D DL model was used which transforms the high-temporal-resolution subframes into nonquantitative DL subframes which allow for improved differentiation between noise and structural/functional landmarks and estimate a constant tracer distribution across time. 
When estimating motion during periods of drastic change in spatial distribution (within the first minute of the scan, ~1-s temporal resolution), the proposed DL method was found to reduce the expected magnitude of error (+/−) in the estimation for an artificially injected motion trace from 16 mm and 7° (conventional method) to 0.7 mm and 0.6° (DL method). During periods of low counts but a relatively constant spatial tracer distribution (60th min of the scan, ~1-s temporal resolution), an expected error was reduced from 0.5 mm and 0.7° (conventional method) to 0.3 mm and 0.4° (DL method). The use of the DL method was found to significantly improve the accuracy of an image-derived input function calculation when motion was present during the first minute of the scan.\",\"PeriodicalId\":46807,\"journal\":{\"name\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"volume\":\"8 1\",\"pages\":\"53-63\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2023-11-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10319877/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10319877/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Data-driven intraframe motion correction of a dynamic brain PET scan (with each frame duration on the order of minutes) is often achieved through the co-registration of high-temporal-resolution (e.g., 1-s duration) subframes to estimate subject head motion. However, this conventional method of subframe co-registration may perform poorly during periods of low counts and/or drastic changes in the spatial tracer distribution over time. Here, we propose a deep-learning (DL), U-Net-based convolutional neural network model that aids PET motion estimation and overcomes these limitations. Unlike DL models for PET denoising, the proposed model uses a nonstandard 2.5-D architecture that transforms the high-temporal-resolution subframes into nonquantitative DL subframes, which allow improved differentiation between noise and structural/functional landmarks and estimate a constant tracer distribution across time. When estimating motion during periods of drastic change in the spatial distribution (within the first minute of the scan, ~1-s temporal resolution), the proposed DL method reduced the expected magnitude of error (+/−) in the estimate of an artificially injected motion trace from 16 mm and 7° (conventional method) to 0.7 mm and 0.6° (DL method). During periods of low counts but a relatively constant spatial tracer distribution (60th min of the scan, ~1-s temporal resolution), the expected error was reduced from 0.5 mm and 0.7° (conventional method) to 0.3 mm and 0.4° (DL method). The use of the DL method was found to significantly improve the accuracy of an image-derived input function calculation when motion was present during the first minute of the scan.
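To make the 2.5-D idea concrete, the sketch below shows a small U-Net-style CNN in PyTorch that takes a stack of adjacent transaxial slices from a noisy ~1-s subframe as input channels (the "2.5-D" convention) and outputs a single nonquantitative slice intended to be easier to co-register. This is only an illustrative sketch under stated assumptions, not the authors' implementation: the depth, channel widths, use of 5 neighbouring slices, and the 96x96 slice size are all hypothetical choices not taken from the paper.

```python
# Minimal 2.5-D U-Net-style sketch (illustrative only; not the published architecture).
# Input: a stack of n_slices adjacent slices from one ~1-s subframe, as channels.
# Output: one nonquantitative "DL" slice intended for rigid co-registration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class UNet25D(nn.Module):
    def __init__(self, n_slices=5, base_ch=32):
        super().__init__()
        self.enc1 = conv_block(n_slices, base_ch)
        self.enc2 = conv_block(base_ch, base_ch * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base_ch * 2, base_ch * 4)
        self.up2 = nn.ConvTranspose2d(base_ch * 4, base_ch * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base_ch * 4, base_ch * 2)
        self.up1 = nn.ConvTranspose2d(base_ch * 2, base_ch, kernel_size=2, stride=2)
        self.dec1 = conv_block(base_ch * 2, base_ch)
        self.out = nn.Conv2d(base_ch, 1, kernel_size=1)  # single central output slice

    def forward(self, x):  # x: (batch, n_slices, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)


# Hypothetical usage: process 5 neighbouring 96x96 slices of one subframe at a time.
model = UNet25D(n_slices=5)
subframe_slices = torch.randn(1, 5, 96, 96)  # placeholder noisy input stack
dl_slice = model(subframe_slices)            # shape (1, 1, 96, 96)
```

In such a pipeline the resulting DL subframes would then be rigidly co-registered to a reference frame (e.g., with a standard registration toolkit) to recover the head-motion trace; that registration step is the same as in the conventional method, with only the images being registered replaced by the DL subframes.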