Globally Deformable Information Selection Transformer for Underwater Image Enhancement
Junbin Zhuang; Yan Zheng; Baolong Guo; Yunyi Yan
IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 1, pp. 19-32
DOI: 10.1109/TCSVT.2024.3451553
Published: 2024-08-29
URL: https://ieeexplore.ieee.org/document/10659034/
Citations: 0
Abstract
In the rapidly evolving image processing domain, transformers have emerged as powerful tools, yet they face significant challenges when applied to underwater image enhancement, such as visual disparity and computational inefficiency. Existing transformers lack a dedicated module for preserving performance while reducing the number of parameters. This study addresses this gap by introducing the globally deformable selection transformer (GS-Transformer), a model designed to enhance global feature selection and pixel connectivity, thereby reducing computational complexity while maintaining image quality. Our novel multiresolution encoder-decoder module explicitly incorporates global information, overcoming the limitations of traditional transformers, while the multilocal coherence preserving loss (MCPL) mechanism ensures content integrity and coherence. Compared with the latest transformer-based underwater image enhancement algorithms, our method is 15 times faster and uses only 41.7% of the parameters. Experimental results on the UIEB, EUVP, and Synthesize datasets show that GS-Transformer achieves state-of-the-art performance in underwater image enhancement with fewer parameters and improved efficiency, representing a significant advancement in the field. Our research will promote the application of transformers in scenarios demanding high real-time performance.
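The abstract does not detail the internals of the globally deformable selection mechanism. As a rough, hypothetical illustration of the general idea behind deformable feature selection (in the spirit of deformable attention, not the authors' actual module), the sketch below bilinearly samples a feature map at learned offset locations around each reference point and aggregates the samples with attention weights. All function names, shapes, and parameters here are illustrative assumptions:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample feat (H, W, C) at fractional coordinates (y, x)."""
    H, W, _ = feat.shape
    y = np.clip(y, 0.0, H - 1.0)
    x = np.clip(x, 0.0, W - 1.0)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    # Weighted sum of the four surrounding pixels.
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def deformable_select(feat, ref_points, offsets, weights):
    """Aggregate features sampled at offset locations around each reference point.

    feat:       (H, W, C) feature map
    ref_points: (Q, 2) reference (y, x) coordinate per query
    offsets:    (Q, K, 2) predicted fractional offsets (hypothetical; in a real
                model these would come from a small offset-prediction branch)
    weights:    (Q, K) attention weights, assumed normalized per query
    returns:    (Q, C) selected features
    """
    Q, K, _ = offsets.shape
    out = np.zeros((Q, feat.shape[-1]))
    for q in range(Q):
        for k in range(K):
            y, x = ref_points[q] + offsets[q, k]
            out[q] += weights[q, k] * bilinear_sample(feat, y, x)
    return out
```

Because only K sampling points are visited per query instead of all H*W positions, this style of selection scales linearly with the number of sampling points, which is one common way such designs cut computation relative to dense global attention.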
Journal description:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.