Automatic Segmentation of Histological Images of Mouse Brains
Juan Cisneros, Alain Lalande, Binnaz Yalcin, Fabrice Meriaudeau, Stephan Collins
Algorithms (MDPI), published 1 December 2023. DOI: 10.3390/a16120553
Using a high-throughput neuroanatomical screen of histological brain sections developed in collaboration with the International Mouse Phenotyping Consortium, we previously reported a list of 198 genes whose inactivation leads to neuroanatomical phenotypes. Achieving this milestone required tens of thousands of hours of manual image segmentation. The present work develops a full pipeline that applies deep learning methods to automatically segment the 24 anatomical regions used in the aforementioned screen. The dataset includes 2000 annotated parasagittal slides (24,000 × 14,000 pixels). Our approach consists of three main parts: the conversion of images (.ROI to .PNG), the training of deep learning models on downscaled images (512 × 256 and 2048 × 1024 pixels) to extract the regions of interest using either the U-Net or Attention U-Net architecture, and finally the conversion of the identified regions back (.PNG to .ROI), enabling visualization and editing within the Fiji/ImageJ 1.54 software environment. With an image resolution of 2048 × 1024, the Attention U-Net provided the best results, with an overall Dice Similarity Coefficient (DSC) of 0.90 ± 0.01 across all 24 regions. Using a single command line, the end user can now pre-analyze images automatically and then run the existing analytical pipeline, made of ImageJ macros, to validate the resulting automatically generated regions of interest. Even for regions with a low DSC, expert neuroanatomists rarely need to correct the results. We estimate a time saving by a factor of 6 to 10.
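For readers unfamiliar with the Attention U-Net, the sketch below shows the attention-gate module (from Oktay et al.) that distinguishes it from a plain U-Net: the decoder's gating signal weights the encoder skip connection before it is passed on. This is a generic PyTorch illustration, not the authors' implementation; the class and parameter names (AttentionGate, gate_channels, skip_channels, inter_channels) are illustrative, and the gating and skip features are assumed to share spatial dimensions.

```python
# Minimal sketch of an attention gate as used in Attention U-Net.
# Not the paper's code; names and layer sizes are illustrative.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_channels: int, skip_channels: int, inter_channels: int):
        super().__init__()
        # Project the decoder (gating) signal and the encoder skip connection
        # to a common intermediate channel dimension.
        self.w_g = nn.Sequential(
            nn.Conv2d(gate_channels, inter_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter_channels),
        )
        self.w_x = nn.Sequential(
            nn.Conv2d(skip_channels, inter_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter_channels),
        )
        # Collapse to a single-channel attention map in [0, 1].
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1, bias=True),
            nn.BatchNorm2d(1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g: gating signal from the decoder; x: skip features from the encoder
        # (assumed here to have the same spatial size as g).
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))
        return x * alpha  # down-weight irrelevant regions of the skip connection
```

In a full Attention U-Net, one such gate is applied to each skip connection before the decoder concatenates the gated encoder features with its upsampled features.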
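The reported evaluation metric, the Dice Similarity Coefficient (DSC), measures the overlap between a predicted mask and its manual annotation. A minimal NumPy sketch follows; the function name is illustrative and not taken from the paper's code.

```python
# Hedged sketch of the Dice Similarity Coefficient for binary masks:
# DSC = 2 * |pred ∩ target| / (|pred| + |target|)
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: average the per-region DSC over a dict of
# region name -> (predicted mask, reference mask) pairs.
# mean_dsc = np.mean([dice_coefficient(p, t) for p, t in masks.values()])
```

Per-region scores computed this way can then be averaged over the 24 anatomical regions to obtain an overall figure such as the 0.90 ± 0.01 reported above.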