Automatic Generation of Lymphoma Post-Treatment PETs using Conditional-GANs

G. Silva, Inês Domingues, Hugo Duarte, João A. M. Santos
{"title":"Automatic Generation of Lymphoma Post-Treatment PETs using Conditional-GANs","authors":"G. Silva, Inês Domingues, Hugo Duarte, João A. M. Santos","doi":"10.1109/DICTA47822.2019.8945835","DOIUrl":null,"url":null,"abstract":"Positron emission tomography (PET) imaging is a nuclear medicine functional imaging technique and as such it is expensive to perform and subjects the human body to radiation. Therefore, it would be ideal to find a technique that could allow for these images to be generated automatically. This generation can be done using deep learning techniques, more specifically with generative adversarial networks. As far as we are aware there have been no attempts at PET-to-PET generation to date. The objective of this article is to develop a generative adversarial network capable of generating after-treatment PET images from pre-treatment PET images. In order to develop this model, PET scans, originally in 3D, were converted to 2D images. Two methods were used, hand picking each slice and maximum intensity projection. After extracting the slices, several image co-registration techniques were applied in order to find which one would produce the best results according to two metrics, peak signal-to-noise ratio and structural similarity index. They achieved results of 18.8 and 0.856, respectively, using data from 90 patients with Hodgkin's Lymphoma.","PeriodicalId":6696,"journal":{"name":"2019 Digital Image Computing: Techniques and Applications (DICTA)","volume":"4 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA47822.2019.8945835","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Positron emission tomography (PET) imaging is a nuclear medicine functional imaging technique; as such, it is expensive to perform and exposes the human body to radiation. It would therefore be ideal to find a technique that allows these images to be generated automatically. This generation can be done using deep learning techniques, more specifically with generative adversarial networks. As far as we are aware, there have been no attempts at PET-to-PET generation to date. The objective of this article is to develop a generative adversarial network capable of generating post-treatment PET images from pre-treatment PET images. In order to develop this model, PET scans, originally in 3D, were converted to 2D images using two methods: hand-picking each slice and maximum intensity projection. After extracting the slices, several image co-registration techniques were applied in order to find which one would produce the best results according to two metrics, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The best results were 18.8 dB PSNR and 0.856 SSIM, using data from 90 patients with Hodgkin's Lymphoma.
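As an illustration of the preprocessing and evaluation steps described in the abstract, the sketch below shows how a 3D PET volume can be collapsed to a 2D image via maximum intensity projection and how a generated image can be scored against the real post-treatment image with PSNR and SSIM. This is a minimal sketch, assuming the volumes are available as NumPy arrays; the function names, projection axis, and toy data are illustrative and not taken from the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def maximum_intensity_projection(volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Collapse a 3D PET volume to 2D by keeping, for each pixel, the
    maximum voxel intensity along the chosen axis (axis choice is an
    assumption for illustration, not specified by the paper)."""
    return volume.max(axis=axis)


def evaluate_generated_pet(real_2d: np.ndarray, generated_2d: np.ndarray):
    """Compare a generated post-treatment image against the real one using
    the two metrics reported in the paper: PSNR and SSIM."""
    data_range = float(real_2d.max() - real_2d.min())
    psnr = peak_signal_noise_ratio(real_2d, generated_2d, data_range=data_range)
    ssim = structural_similarity(real_2d, generated_2d, data_range=data_range)
    return psnr, ssim


if __name__ == "__main__":
    # Toy volumes standing in for real and GAN-generated post-treatment scans.
    rng = np.random.default_rng(0)
    post_real = rng.random((128, 128, 64)).astype(np.float32)
    post_generated = post_real + 0.05 * rng.standard_normal(post_real.shape).astype(np.float32)

    real_mip = maximum_intensity_projection(post_real)
    gen_mip = maximum_intensity_projection(post_generated)
    print(evaluate_generated_pet(real_mip, gen_mip))
```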