{"title":"利用 CT 图像间变形场的分层编码进行数据扩增","authors":"Yuya Kuriyama;Mitsuhiro Nakamura;Megumi Nakao","doi":"10.1109/TRPMS.2024.3408818","DOIUrl":null,"url":null,"abstract":"The field of medical machine learning has encountered the challenge of constructing a large-scale image database that includes both the anatomical variability and teaching labels because there are often not sufficient cases of a specific disease. Adversarial learning has been studied for nonlinear data augmentation. However, deep learning models may produce anatomically unrealistic structures or inaccurate pixel values when applied to small sets of computed tomography (CT) images. To overcome this issue, we propose a data augmentation method that uses the hierarchical encoding of deformation fields between the CT images. This allows for the generation of synthetic CT images with shape variability while preserving the patient-specific CT values. Our framework encodes the spatial features of deformation fields into hierarchical latent variables, and generates the synthetic deformation fields by updating the values in specific layers. To implement this concept, we applied the StyleGAN2 and its encoder pixel2style2pixel to the deformation fields and added the ability to control the level of detail in the deformation through the Style Mixing. Our experiments demonstrated that our framework produced high-quality synthetic CT images compared with a conventional framework. Additionally, we applied the augmented datasets with teaching labels to semantic segmentation tasks targeting the liver and stomach, and found that accuracy improved by 1.3% and 7.9%, respectively, which surpassed the results obtained by the existing data augmentation methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 8","pages":"939-949"},"PeriodicalIF":4.6000,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Data Augmentation Using the Hierarchical Encoding of Deformation Fields Between CT Images\",\"authors\":\"Yuya Kuriyama;Mitsuhiro Nakamura;Megumi Nakao\",\"doi\":\"10.1109/TRPMS.2024.3408818\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The field of medical machine learning has encountered the challenge of constructing a large-scale image database that includes both the anatomical variability and teaching labels because there are often not sufficient cases of a specific disease. Adversarial learning has been studied for nonlinear data augmentation. However, deep learning models may produce anatomically unrealistic structures or inaccurate pixel values when applied to small sets of computed tomography (CT) images. To overcome this issue, we propose a data augmentation method that uses the hierarchical encoding of deformation fields between the CT images. This allows for the generation of synthetic CT images with shape variability while preserving the patient-specific CT values. Our framework encodes the spatial features of deformation fields into hierarchical latent variables, and generates the synthetic deformation fields by updating the values in specific layers. To implement this concept, we applied the StyleGAN2 and its encoder pixel2style2pixel to the deformation fields and added the ability to control the level of detail in the deformation through the Style Mixing. 
Our experiments demonstrated that our framework produced high-quality synthetic CT images compared with a conventional framework. Additionally, we applied the augmented datasets with teaching labels to semantic segmentation tasks targeting the liver and stomach, and found that accuracy improved by 1.3% and 7.9%, respectively, which surpassed the results obtained by the existing data augmentation methods.\",\"PeriodicalId\":46807,\"journal\":{\"name\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"volume\":\"8 8\",\"pages\":\"939-949\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-06-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10547219/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10547219/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
The field of medical machine learning faces the challenge of constructing large-scale image databases that capture both anatomical variability and teaching labels, because there are often not enough cases of a specific disease. Adversarial learning has been studied for nonlinear data augmentation. However, deep learning models may produce anatomically unrealistic structures or inaccurate pixel values when applied to small sets of computed tomography (CT) images. To overcome this issue, we propose a data augmentation method that uses the hierarchical encoding of deformation fields between CT images. This allows the generation of synthetic CT images with shape variability while preserving the patient-specific CT values. Our framework encodes the spatial features of deformation fields into hierarchical latent variables and generates synthetic deformation fields by updating the values in specific layers. To implement this concept, we applied StyleGAN2 and its encoder pixel2style2pixel to the deformation fields and added the ability to control the level of detail in the deformation through style mixing. Our experiments demonstrated that our framework produced higher-quality synthetic CT images than a conventional framework. Additionally, we applied the augmented, labeled datasets to semantic segmentation tasks targeting the liver and stomach and found that accuracy improved by 1.3% and 7.9%, respectively, surpassing the results obtained by existing data augmentation methods.
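Below is a minimal, hypothetical PyTorch sketch of the two operations the abstract describes: mixing hierarchical latent codes at a chosen layer to control the level of deformation detail, and warping a CT image with the resulting deformation field so that the patient-specific CT values are preserved. The encoder and generator referenced in the usage comment stand in for pixel2style2pixel and StyleGAN2; all function names, shapes, and parameters here are assumptions for illustration, not the authors' released implementation.

# Hypothetical sketch (not the authors' code): mix hierarchical latent codes at a
# chosen layer to control deformation detail, then warp a CT image with the
# synthesized deformation field so the original CT values are kept.
import torch
import torch.nn.functional as F

def style_mix(w_a, w_b, crossover_layer):
    # w_a, w_b: hierarchical latent codes of shape (num_layers, latent_dim).
    # Coarse layers (before crossover_layer) come from w_a, fine layers from w_b.
    mixed = w_a.clone()
    mixed[crossover_layer:] = w_b[crossover_layer:]
    return mixed

def warp_ct(ct, displacement):
    # ct: (N, 1, H, W) CT image; displacement: (N, H, W, 2) dense field in
    # normalized [-1, 1] coordinates. Bilinear resampling preserves CT values
    # up to interpolation.
    n, _, h, w = ct.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    return F.grid_sample(ct, identity + displacement, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Usage sketch; encoder and generator stand in for pixel2style2pixel and StyleGAN2:
# w_a = encoder(field_a)                          # latents of one deformation field
# w_b = encoder(field_b)                          # latents of another field
# w_new = style_mix(w_a, w_b, crossover_layer=4)  # coarse from a, fine from b
# synthetic_field = generator(w_new)              # decoded synthetic deformation field
# augmented_ct = warp_ct(ct_image, synthetic_field)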