{"title":"条件生成对抗性网络对生物组织的虚拟荧光翻译。","authors":"Xin Liu, Boyi Li, Chengcheng Liu, Dean Ta","doi":"10.1007/s43657-023-00094-1","DOIUrl":null,"url":null,"abstract":"<p><p>Fluorescence labeling and imaging provide an opportunity to observe the structure of biological tissues, playing a crucial role in the field of histopathology. However, when labeling and imaging biological tissues, there are still some challenges, e.g., time-consuming tissue preparation steps, expensive reagents, and signal bias due to photobleaching. To overcome these limitations, we present a deep-learning-based method for fluorescence translation of tissue sections, which is achieved by conditional generative adversarial network (cGAN). Experimental results from mouse kidney tissues demonstrate that the proposed method can predict the other types of fluorescence images from one raw fluorescence image, and implement the virtual multi-label fluorescent staining by merging the generated different fluorescence images as well. Moreover, this proposed method can also effectively reduce the time-consuming and laborious preparation in imaging processes, and further saves the cost and time.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s43657-023-00094-1.</p>","PeriodicalId":74435,"journal":{"name":"Phenomics (Cham, Switzerland)","volume":"3 4","pages":"408-420"},"PeriodicalIF":3.7000,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425324/pdf/","citationCount":"0","resultStr":"{\"title\":\"Virtual Fluorescence Translation for Biological Tissue by Conditional Generative Adversarial Network.\",\"authors\":\"Xin Liu, Boyi Li, Chengcheng Liu, Dean Ta\",\"doi\":\"10.1007/s43657-023-00094-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Fluorescence labeling and imaging provide an opportunity to observe the structure of biological tissues, playing a crucial role in the field of histopathology. However, when labeling and imaging biological tissues, there are still some challenges, e.g., time-consuming tissue preparation steps, expensive reagents, and signal bias due to photobleaching. To overcome these limitations, we present a deep-learning-based method for fluorescence translation of tissue sections, which is achieved by conditional generative adversarial network (cGAN). Experimental results from mouse kidney tissues demonstrate that the proposed method can predict the other types of fluorescence images from one raw fluorescence image, and implement the virtual multi-label fluorescent staining by merging the generated different fluorescence images as well. 
Moreover, this proposed method can also effectively reduce the time-consuming and laborious preparation in imaging processes, and further saves the cost and time.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s43657-023-00094-1.</p>\",\"PeriodicalId\":74435,\"journal\":{\"name\":\"Phenomics (Cham, Switzerland)\",\"volume\":\"3 4\",\"pages\":\"408-420\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2023-03-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425324/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Phenomics (Cham, Switzerland)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s43657-023-00094-1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/8/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"GENETICS & HEREDITY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Phenomics (Cham, Switzerland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s43657-023-00094-1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"GENETICS & HEREDITY","Score":null,"Total":0}
Virtual Fluorescence Translation for Biological Tissue by Conditional Generative Adversarial Network.
Fluorescence labeling and imaging provide an opportunity to observe the structure of biological tissues and play a crucial role in histopathology. However, labeling and imaging biological tissues still face several challenges, such as time-consuming tissue preparation steps, expensive reagents, and signal bias caused by photobleaching. To overcome these limitations, we present a deep-learning-based method for fluorescence translation of tissue sections, achieved with a conditional generative adversarial network (cGAN). Experimental results on mouse kidney tissues demonstrate that the proposed method can predict other types of fluorescence images from a single raw fluorescence image and achieve virtual multi-label fluorescent staining by merging the generated fluorescence images. Moreover, the proposed method effectively reduces the time-consuming and laborious preparation required for imaging, saving both cost and time.
Supplementary information: The online version contains supplementary material available at 10.1007/s43657-023-00094-1.
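To illustrate how a cGAN-based fluorescence translation of this kind can be set up, the sketch below shows a minimal pix2pix-style generator/discriminator pair and a single training step in PyTorch. The layer sizes, the L1 loss weight, and the single-channel input/output assumption are illustrative choices only, not the architecture or hyperparameters reported in the paper.

```python
# Minimal sketch of pix2pix-style conditional GAN training for
# fluorescence-to-fluorescence image translation. All architecture details
# and hyperparameters here are illustrative assumptions, not the authors'
# exact configuration.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Small encoder-decoder mapping a source fluorescence channel to a target channel."""
    def __init__(self, in_ch=1, out_ch=1, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.BatchNorm2d(base), nn.ReLU(),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the source image (source and target concatenated)."""
    def __init__(self, in_ch=2, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))


def train_step(G, D, opt_g, opt_d, src, tgt, l1_weight=100.0):
    """One adversarial + L1 reconstruction update, as in standard pix2pix training."""
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    # Update discriminator: real pairs labeled 1, generated pairs labeled 0.
    fake = G(src).detach()
    d_real = D(src, tgt)
    d_fake = D(src, fake)
    loss_d = 0.5 * (bce(d_real, torch.ones_like(d_real)) +
                    bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Update generator: fool the discriminator while staying pixel-wise close
    # to the ground-truth target channel.
    fake = G(src)
    d_fake = D(src, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, tgt)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

In this setup, src would be the acquired fluorescence channel and tgt the channel to be predicted; the L1 term keeps the translated image pixel-wise close to the ground truth while the adversarial term encourages realistic texture. Training one such generator per target stain and merging their outputs is one plausible way to obtain the virtual multi-label staining described in the abstract.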