Hyperparameter tuning for deep learning semantic image segmentation of micro computed tomography scanned fiber-reinforced composites
Benjamin Provencher, Aly Badran, Jonathan Kroll, Mike Marsh
Tomography of Materials and Structures, Volume 5, Article 100032
DOI: 10.1016/j.tmater.2024.100032
Published: 2024-04-26
URL: https://www.sciencedirect.com/science/article/pii/S2949673X24000093
Citations: 0
Abstract
Image segmentation with deep learning models has significantly improved the accuracy of pixel-wise labeling of scientific images, which is critical for many quantitative image analyses. This has been made feasible by U-Net and related convolutional neural network architectures. Although adoption of these models has been widespread, their training data pools and hyperparameters have mostly been determined by educated guesses and trial and error. In this study, we present observations of how training data volume, data augmentation, and patch size affect deep learning performance within a limited data set. We study U-Net model training on four different samples of x-ray CT images of fiber-reinforced composites. Because the training process is not deterministic, we relied on seven-fold replication of each experimental condition to avoid under-sampling and to observe variance in model training. Unsurprisingly, we find that greater training data volume strongly benefits individual models' final accuracy and learning speed while reducing variance among replicates. Importantly, data augmentation profoundly benefits model performance, especially when ground truth is scarce, and we conclude that high data augmentation coefficients should be used in semantic segmentation models for scientific imaging. Future work to describe and measure image complexity is warranted and is likely to ultimately guide researchers on the minimum training data volume required for particular scientific imaging deep learning tasks.
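To make the experimental design concrete, the following is a minimal Python sketch of the kind of experiment grid the abstract describes: training data volume, augmentation coefficient, and patch size are varied, and each condition is replicated seven times with different seeds. This is not the authors' code; `augment_pair` and `train_unet` are hypothetical names, `train_unet` is only a placeholder for actual U-Net training, and all numeric values (data fractions, augmentation factors, patch sizes) are illustrative assumptions.

```python
# Illustrative sketch only: the actual study trained U-Net models on x-ray CT
# patches; here train_unet() is a stub and the grid values are made up.
import random
import numpy as np

def augment_pair(image: np.ndarray, mask: np.ndarray):
    """Apply the same random 90-degree rotation / horizontal flip to an image patch and its label mask."""
    k = random.randint(0, 3)
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if random.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image.copy(), mask.copy()

def train_unet(patches, masks, patch_size: int, aug_factor: int, seed: int) -> float:
    """Placeholder: train a U-Net on augmented patches and return a validation accuracy."""
    random.seed(seed)
    np.random.seed(seed)
    # Expand the training pool by the augmentation coefficient.
    augmented = [augment_pair(p, m) for p, m in zip(patches, masks) for _ in range(aug_factor)]
    # ... build U-Net with the chosen patch size, train on `augmented`, evaluate on held-out slices ...
    return float("nan")  # stand-in for the measured accuracy

# Toy data standing in for CT image patches and ground-truth masks.
rng = np.random.default_rng(0)
all_patches = [rng.random((256, 256)) for _ in range(100)]
all_masks = [(p > 0.5).astype(np.uint8) for p in all_patches]

results = {}
for fraction in (0.25, 0.5, 1.0):        # training data volume
    for aug_factor in (1, 4, 16):         # data augmentation coefficient
        for patch_size in (128, 256):     # patch size
            n = int(fraction * len(all_patches))
            scores = [train_unet(all_patches[:n], all_masks[:n], patch_size, aug_factor, seed)
                      for seed in range(7)]  # seven-fold replication of each condition
            results[(fraction, aug_factor, patch_size)] = (np.mean(scores), np.std(scores))
```

Recording the mean and standard deviation across the seven replicates is what allows the variance-related observations in the abstract (e.g., more training data reducing variance among replicates) to be made at all, since a single training run of a non-deterministic process could be an outlier.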