Highlight mask-guided adaptive residual network for single image highlight detection and removal
Shuaibin Wang, Li Li, Juan Wang, Tao Peng, Zhenwei Li
Computer Animation and Virtual Worlds, vol. 35, no. 3, published 2024-05-27
DOI: 10.1002/cav.2271 — https://onlinelibrary.wiley.com/doi/10.1002/cav.2271
Citations: 0
Abstract
Specular highlight detection and removal is a challenging task. Although various methods exist for removing specular highlights, they often fail to effectively preserve the color and texture details of objects after highlight removal, due to the high brightness and nonuniform distribution of highlights. Furthermore, when processing scenes with complex highlight properties, existing methods frequently encounter performance bottlenecks, which restrict their applicability. Therefore, we introduce a highlight mask-guided adaptive residual network (HMGARN). HMGARN comprises three main components: detection-net, an adaptive-removal network (AR-Net), and reconstruct-net. Specifically, detection-net accurately predicts a highlight mask from a single RGB image. The predicted highlight mask is then fed into AR-Net, which adaptively guides the model to remove specular highlights and estimate a highlight-free image. Subsequently, reconstruct-net progressively refines this result, removes any residual specular highlights, and constructs the final high-quality image without specular highlights. We evaluated our method on the public SHIQ dataset and confirmed its superiority through comparative experimental results.
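The abstract describes a three-stage pipeline: a detection network predicts a highlight mask, a mask-guided removal network produces a coarse highlight-free estimate, and a reconstruction network refines it. The following is a minimal PyTorch sketch of that data flow only; the class names (DetectionNet, ARNet, ReconstructNet, HMGARN), channel widths, layer choices, and the mask-gated residual formula are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract
# (detection-net -> AR-Net -> reconstruct-net). All architectural details
# below are stand-ins, not the paper's actual network.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Plain Conv-BN-ReLU block used as a placeholder for the paper's layers.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DetectionNet(nn.Module):
    """Predicts a single-channel highlight mask from an RGB image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32), conv_block(32, 32))
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, rgb):
        return torch.sigmoid(self.head(self.body(rgb)))


class ARNet(nn.Module):
    """Mask-guided removal: the predicted mask gates a learned residual."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(4, 32), conv_block(32, 32))
        self.head = nn.Conv2d(32, 3, kernel_size=1)

    def forward(self, rgb, mask):
        residual = self.head(self.body(torch.cat([rgb, mask], dim=1)))
        # Apply the correction mainly where the mask indicates highlights.
        return rgb - mask * residual


class ReconstructNet(nn.Module):
    """Progressively refines the coarse highlight-free estimate."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32), conv_block(32, 32))
        self.head = nn.Conv2d(32, 3, kernel_size=1)

    def forward(self, coarse):
        return coarse + self.head(self.body(coarse))  # residual refinement


class HMGARN(nn.Module):
    def __init__(self):
        super().__init__()
        self.detect = DetectionNet()
        self.remove = ARNet()
        self.refine = ReconstructNet()

    def forward(self, rgb):
        mask = self.detect(rgb)          # highlight mask
        coarse = self.remove(rgb, mask)  # mask-guided highlight removal
        final = self.refine(coarse)      # refined highlight-free image
        return mask, final


if __name__ == "__main__":
    model = HMGARN()
    x = torch.rand(1, 3, 256, 256)       # dummy RGB input
    mask, out = model(x)
    print(mask.shape, out.shape)         # (1, 1, 256, 256), (1, 3, 256, 256)
```

Gating a learned residual with the predicted mask is one plausible reading of "adaptively guides the model to remove specular highlights"; the actual guidance mechanism in HMGARN may differ.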
Journal Introduction:
With the advent of very powerful PCs and high-end graphics cards, there has been remarkable development in Virtual Worlds, real-time computer animation and simulation, and games. At the same time, new and cheaper Virtual Reality devices have appeared, allowing interaction with these real-time Virtual Worlds and even with real worlds through Augmented Reality. Three-dimensional characters, especially Virtual Humans, are now of exceptional quality, which allows them to be used in the movie industry. But this is only a beginning: with the development of Artificial Intelligence and agent technology, these characters will become more and more autonomous and even intelligent. They will inhabit Virtual Worlds in a Virtual Life together with animals and plants.