Towards Photorealistic Portrait Style Transfer in Unconstrained Conditions
Xinbo Wang, Qing Zhang, Yongwei Nie, Wei-Shi Zheng
IEEE Transactions on Visualization and Computer Graphics, 2025. DOI: 10.1109/TVCG.2025.3529751
Abstract
We present a photorealistic portrait style transfer approach that produces high-quality results under previously challenging unconstrained conditions, e.g., large facial perspective differences between portraits and faces with complex illumination (e.g., shadow and highlight) or occlusion, and that requires no portrait parsing masks at test time. We achieve this with a framework that learns robust dense correspondence across portraits for semantically aligned style transfer, in which a regional style contrastive learning strategy boosts the effectiveness of semantic-aware style transfer while enhancing robustness to complex illumination. Extensive experiments demonstrate the superiority of our method. Our code is available at https://github.com/wangxb29/PPST.
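To make the "regional style contrastive learning" idea concrete, below is a minimal PyTorch sketch of one plausible reading: per-region style descriptors (channel-wise mean and standard deviation) pulled together across corresponding regions and pushed apart across non-corresponding ones via an InfoNCE objective. All names (`regional_style`, `region masks`, the `temperature` value, and the mean/std style descriptor) are illustrative assumptions, not the authors' actual implementation; consult the linked repository for the real loss.

```python
# Hypothetical sketch of a regional style contrastive loss.
# Assumption: "style" per semantic region is summarized by the channel-wise
# mean and std of deep features inside that region's mask.
import torch
import torch.nn.functional as F


def regional_style(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Pool a style descriptor (channel mean ++ std) over one region.

    feat: (C, H, W) feature map; mask: (H, W) boolean region mask.
    Returns a (2C,) style vector.
    """
    region = feat[:, mask]            # (C, N) features of pixels in the region
    mu = region.mean(dim=1)
    sigma = region.std(dim=1)         # assumes each region has >= 2 pixels
    return torch.cat([mu, sigma], dim=0)


def regional_style_contrastive_loss(out_feat, ref_feat, out_masks, ref_masks,
                                    temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over per-region style vectors.

    For each semantic region, the stylized output's style vector should match
    the reference portrait's style vector for the *same* region (positive)
    and differ from the style vectors of all other regions (negatives).
    """
    q = torch.stack([regional_style(out_feat, m) for m in out_masks])  # (R, 2C)
    k = torch.stack([regional_style(ref_feat, m) for m in ref_masks])  # (R, 2C)
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / temperature                 # (R, R) similarities
    labels = torch.arange(q.size(0), device=q.device)  # region i <-> region i
    return F.cross_entropy(logits, labels)
```

One appeal of a contrastive formulation here, consistent with the abstract's claim, is that shadows or highlights inside a region perturb its style statistics less than they would a pixel-wise style loss, since the negatives force the descriptor to discriminate regions rather than reproduce illumination exactly.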