Sediment grain segmentation in thin-section images using dual-modal Vision Transformer
Dongyu Zheng, Li Hou, Xiumian Hu, Mingcai Hou, Kai Dong, Sihai Hu, Runlin Teng, Chao Ma
Computers & Geosciences, Volume 191 (2024), Article 105664
DOI: 10.1016/j.cageo.2024.105664
Published: 2024-06-21
https://www.sciencedirect.com/science/article/pii/S009830042400147X
Abstract
Accurately identifying grain types in thin sections of sandy sediments or sandstones is crucial for understanding their provenance, depositional environments, and potential as natural resources. Although traditional computer vision methods and machine learning algorithms have been used for automatic grain identification, recent advances in deep learning have opened up new possibilities for achieving more reliable results with less manual labor. In this study, we present Trans-SedNet, a state-of-the-art dual-modal Vision Transformer (ViT) model that uses both cross-polarized (XPL) and plane-polarized light (PPL) images to perform semantic segmentation of thin-section images. Our model classifies a total of ten grain types, including subtypes of quartz, feldspar, and lithic fragments, to emulate the manual identification process in sedimentary petrology. To optimize performance, we use SegFormer as the model backbone and add window- and mix-attention to the encoder to capture local information in the images and to make the best use of the XPL and PPL inputs. We also use a combination of focal and Dice losses, together with a smoothing procedure, to address class imbalance and reduce over-segmentation. A comparative analysis against several deep convolutional neural networks and ViT models, including FCN, U-Net, DeepLabV3Plus, SegNeXT, and CMX, shows that Trans-SedNet outperforms the other models, with significantly higher mean intersection over union (mIoU) and mean pixel accuracy (mPA). An additional experiment testing the models' ability to handle dual-modal information reveals that dual-modal models, including Trans-SedNet, achieve better results than single-modal models when given the extra PPL input. Our study demonstrates the potential of ViT models for semantic segmentation of thin-section images and highlights the importance of dual-modal models for handling complex input in various geoscience disciplines. As data quality and quantity improve, our model has the potential to enhance the efficiency and reliability of grain identification in sedimentary petrology and related subjects.
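For reference, mIoU and mPA are standard segmentation metrics; the formulations below are the conventional per-class definitions (the paper's exact averaging scheme is not specified in this abstract). With C classes (ten grain types here) and per-class true-positive, false-positive, and false-negative pixel counts TP_c, FP_c, and FN_c:

\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C}\frac{TP_c}{TP_c + FP_c + FN_c},
\qquad
\mathrm{mPA} = \frac{1}{C}\sum_{c=1}^{C}\frac{TP_c}{TP_c + FN_c}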
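The sketch below shows one common way to combine a focal and a Dice loss for multi-class segmentation in PyTorch. It is an illustrative assumption, not the authors' implementation: the function name focal_dice_loss, the gamma value, and the 50/50 weighting are hypothetical, and the paper's smoothing procedure is not reproduced here.

import torch
import torch.nn.functional as F

def focal_dice_loss(logits, target, gamma=2.0, weight=0.5, eps=1e-6):
    """logits: (N, C, H, W) raw scores; target: (N, H, W) class indices."""
    num_classes = logits.shape[1]
    log_prob = F.log_softmax(logits, dim=1)               # (N, C, H, W)
    prob = log_prob.exp()

    # Focal term: down-weight well-classified pixels to counter class imbalance.
    ce = F.nll_loss(log_prob, target, reduction="none")   # per-pixel CE, (N, H, W)
    pt = prob.gather(1, target.unsqueeze(1)).squeeze(1)   # prob of the true class
    focal = ((1.0 - pt) ** gamma * ce).mean()

    # Dice term: overlap-based loss that is less sensitive to class frequency.
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (prob * one_hot).sum(dims)
    cardinality = prob.sum(dims) + one_hot.sum(dims)
    dice = 1.0 - ((2.0 * intersection + eps) / (cardinality + eps)).mean()

    return weight * focal + (1.0 - weight) * dice

if __name__ == "__main__":
    logits = torch.randn(2, 10, 64, 64)                   # batch of 2, 10 grain classes
    target = torch.randint(0, 10, (2, 64, 64))            # random ground-truth labels
    print(focal_dice_loss(logits, target).item())

The focal term reduces the gradient contribution of easy pixels so that rare grain classes matter more during training, while the Dice term optimizes region overlap directly; this complementarity is why the two are commonly combined for imbalanced segmentation tasks.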
About the journal
Computers & Geosciences publishes high-impact, original research at the interface between computer science and the geosciences. Publications should apply modern computer science paradigms, whether computational or informatics-based, to address problems in the geosciences.