Underwater image clarifying based on human visual colour constancy using double-opponency

Bin Kong, Jing Qian, Pinhao Song, Jing Yang, Amir Hussain

CAAI Transactions on Intelligence Technology, vol. 9, no. 3, pp. 632-648. Published 12 July 2023. DOI: 10.1049/cit2.12260. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.12260
Citations: 0
Abstract
Underwater images often exhibit biased colours and reduced contrast because of absorption and scattering as light propagates through water. Such degraded images cannot meet the needs of underwater operations. The main problem with classic underwater image restoration or enhancement methods is that they require long computation times, and the colour or contrast of the resulting images is often still unsatisfactory. Instead of using a complicated physical model of underwater imaging degradation, we propose a new method that processes underwater images by imitating the colour constancy mechanism of human vision using double-opponency. Firstly, the original image is converted to LMS space. Then the signals are linearly combined, and Gaussian convolutions are performed to imitate the function of receptive fields (RFs). Next, two RFs of different sizes work together to constitute the double-opponency response. Finally, the underwater light is estimated and used to correct the colours in the image. A further contrast stretch on the luminance is optional. Experiments show that the proposed method obtains clarified underwater images of higher quality than the originals, at a significantly lower time cost than other previously published typical methods.
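For readers who want a concrete picture of the pipeline, the following is a minimal Python sketch that follows the steps named in the abstract, not the paper's actual implementation. The RGB-to-LMS matrix, the opponent-channel weights, the receptive-field sizes, the light-estimation heuristic, and the function name clarify_underwater are all illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

# Hunt-Pointer-Estevez-style RGB -> LMS matrix (an assumption; the paper's
# exact transform is not given in the abstract).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])

def clarify_underwater(img_rgb, sigma_center=2.0, sigma_surround=8.0,
                       stretch=True):
    """Colour-correct an underwater RGB image (floats in [0, 1], H x W x 3)."""
    # Step 1: convert the original image to LMS space.
    lms = img_rgb @ RGB2LMS.T
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]

    # Step 2: linearly combine the cone signals into opponent channels
    # (red-green, blue-yellow, luminance); the weights are assumptions.
    opp = np.stack([L - M,
                    S - 0.5 * (L + M),
                    (L + M + S) / 3.0], axis=-1)

    # Steps 2-3: Gaussian convolutions imitate receptive fields; two RF
    # sizes (centre and surround) together form the double-opponent response.
    center = gaussian_filter(opp, sigma=(sigma_center, sigma_center, 0))
    surround = gaussian_filter(opp, sigma=(sigma_surround, sigma_surround, 0))
    double_opp = center - surround

    # Step 4: estimate the underwater light. As a placeholder heuristic we
    # read the LMS value where the double-opponent response is strongest;
    # the paper's actual estimator may differ.
    strength = np.abs(double_opp).sum(axis=-1)
    y, x = np.unravel_index(np.argmax(strength), strength.shape)
    light = lms[y, x] + 1e-6

    # von Kries-style correction: rescale each LMS channel by the estimated
    # light, then map back to RGB.
    corrected = (lms / light) * light.mean()
    rgb = corrected @ np.linalg.inv(RGB2LMS).T

    if stretch:
        # Optional contrast stretch on the luminance channel.
        lum = rgb.mean(axis=-1, keepdims=True)
        lo, hi = np.percentile(lum, (1, 99))
        stretched = np.clip((lum - lo) / (hi - lo + 1e-6), 0.0, 1.0)
        rgb = rgb * stretched / np.maximum(lum, 1e-6)
    return np.clip(rgb, 0.0, 1.0)

Calling clarify_underwater(img) on a float image in [0, 1] returns a colour-corrected result; the centre-surround difference between the two Gaussian scales is what gives the model its double-opponent character.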
Journal introduction:
CAAI Transactions on Intelligence Technology is a leading venue for original research on the theoretical and experimental aspects of artificial intelligence technology. We are a fully open access journal co-published by the Institution of Engineering and Technology (IET) and the Chinese Association for Artificial Intelligence (CAAI), providing research that is openly accessible to read and share worldwide.