Unsupervised generative model for simulating post-operative double eyelid image.

IF 2.4 · CAS Q4 (Medicine) · JCR Q3 (Engineering, Biomedical) · Physical and Engineering Sciences in Medicine · Pub Date: 2024-10-21 · DOI: 10.1007/s13246-024-01488-9
Renzhong Wu, Shenghui Liao, Peishan Dai, Fuchang Han, Xiaoyan Kui, Xuefei Song
{"title":"模拟双眼皮术后图像的无监督生成模型。","authors":"Renzhong Wu, Shenghui Liao, Peishan Dai, Fuchang Han, Xiaoyan Kui, Xuefei Song","doi":"10.1007/s13246-024-01488-9","DOIUrl":null,"url":null,"abstract":"<p><p>Simulating the outcome of double eyelid surgery is a challenging task. Many existing approaches rely on complex and time-consuming 3D digital models to reconstruct facial features for simulating facial plastic surgery outcomes. Some recent research performed a simple affine transformation approach based on 2D images to simulate double eyelid surgery outcomes. However, these methods have faced challenges, such as generating unnatural simulation outcomes and requiring manual removal of masks from images. To address these issues, we have pioneered the use of an unsupervised generative model to generate post-operative double eyelid images. Firstly, we created a dataset involving pre- and post-operative 2D images of double eyelid surgery. Secondly, we proposed a novel attention-class activation map module, which was embedded in a generative adversarial model to facilitate translating a single eyelid image to a double eyelid image. This innovative module enables the generator to selectively focus on the eyelid region that differentiates between the source and target domain, while enhancing the discriminator's ability to discern differences between real and generated images. Finally, we have adjusted the adversarial consistency loss to guide the generator in preserving essential features from the source image and eliminating any masks when generating the double eyelid image. Experimental results have demonstrated the superiority of our approach over existing state-of-the-art techniques.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unsupervised generative model for simulating post-operative double eyelid image.\",\"authors\":\"Renzhong Wu, Shenghui Liao, Peishan Dai, Fuchang Han, Xiaoyan Kui, Xuefei Song\",\"doi\":\"10.1007/s13246-024-01488-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Simulating the outcome of double eyelid surgery is a challenging task. Many existing approaches rely on complex and time-consuming 3D digital models to reconstruct facial features for simulating facial plastic surgery outcomes. Some recent research performed a simple affine transformation approach based on 2D images to simulate double eyelid surgery outcomes. However, these methods have faced challenges, such as generating unnatural simulation outcomes and requiring manual removal of masks from images. To address these issues, we have pioneered the use of an unsupervised generative model to generate post-operative double eyelid images. Firstly, we created a dataset involving pre- and post-operative 2D images of double eyelid surgery. Secondly, we proposed a novel attention-class activation map module, which was embedded in a generative adversarial model to facilitate translating a single eyelid image to a double eyelid image. This innovative module enables the generator to selectively focus on the eyelid region that differentiates between the source and target domain, while enhancing the discriminator's ability to discern differences between real and generated images. 
Finally, we have adjusted the adversarial consistency loss to guide the generator in preserving essential features from the source image and eliminating any masks when generating the double eyelid image. Experimental results have demonstrated the superiority of our approach over existing state-of-the-art techniques.</p>\",\"PeriodicalId\":48490,\"journal\":{\"name\":\"Physical and Engineering Sciences in Medicine\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-10-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Physical and Engineering Sciences in Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s13246-024-01488-9\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physical and Engineering Sciences in Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s13246-024-01488-9","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Simulating the outcome of double eyelid surgery is a challenging task. Many existing approaches rely on complex and time-consuming 3D digital models to reconstruct facial features for simulating facial plastic surgery outcomes. Some recent work has applied a simple affine transformation to 2D images to simulate double eyelid surgery outcomes. However, these methods face challenges such as producing unnatural simulation results and requiring manual removal of masks from the images. To address these issues, we have pioneered the use of an unsupervised generative model to generate post-operative double eyelid images. First, we created a dataset of pre- and post-operative 2D images of double eyelid surgery. Second, we proposed a novel attention class activation map (CAM) module, embedded in a generative adversarial model, to translate a single eyelid image into a double eyelid image. This module enables the generator to selectively focus on the eyelid region that differentiates the source and target domains, while enhancing the discriminator's ability to discern differences between real and generated images. Finally, we adjusted the adversarial consistency loss to guide the generator to preserve essential features of the source image and eliminate any masks when generating the double eyelid image. Experimental results demonstrate that our approach outperforms existing state-of-the-art techniques.
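As a rough illustration of the mechanism the abstract describes, below is a minimal sketch (PyTorch assumed; the paper's code is not reproduced here) of a CAM-style attention module: auxiliary classifiers over globally pooled features produce domain logits, and their weights are reused as per-channel attention so the network concentrates on the region that distinguishes single eyelid (source) from double eyelid (target) images. All names (CAMAttention, gap_fc, gmp_fc, fuse) are illustrative assumptions, not identifiers from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CAMAttention(nn.Module):
    """Class-activation-map style attention over encoder feature maps."""

    def __init__(self, channels: int):
        super().__init__()
        # Auxiliary domain classifiers on globally average-/max-pooled features.
        self.gap_fc = nn.Linear(channels, 1, bias=False)
        self.gmp_fc = nn.Linear(channels, 1, bias=False)
        # Fuse the two attention-weighted feature maps back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor):
        b, c, _, _ = x.shape

        # Domain logit from average-pooled features; the classifier weights
        # indicate which channels are most domain-discriminative.
        gap = F.adaptive_avg_pool2d(x, 1).view(b, c)
        gap_logit = self.gap_fc(gap)                        # (b, 1)
        gap_weight = self.gap_fc.weight.view(1, c, 1, 1)    # reused as attention
        x_gap = x * gap_weight

        # Same idea with max pooling, which tends to emphasize sparse,
        # localized evidence such as the eyelid crease.
        gmp = F.adaptive_max_pool2d(x, 1).view(b, c)
        gmp_logit = self.gmp_fc(gmp)
        gmp_weight = self.gmp_fc.weight.view(1, c, 1, 1)
        x_gmp = x * gmp_weight

        # Attention-weighted features, domain logits for an auxiliary CAM loss,
        # and a spatial heatmap that can be visualized for inspection.
        out = F.relu(self.fuse(torch.cat([x_gap, x_gmp], dim=1)))
        cam_logits = torch.cat([gap_logit, gmp_logit], dim=1)    # (b, 2)
        heatmap = out.sum(dim=1, keepdim=True)                   # (b, 1, H, W)
        return out, cam_logits, heatmap

# Example: plug the module into a generator's bottleneck (toy shapes).
feats = torch.randn(2, 256, 64, 64)
attended, cam_logits, heatmap = CAMAttention(256)(feats)

The abstract also mentions attaching analogous attention to the discriminator and adjusting an adversarial consistency loss so that non-eyelid content of the source image is preserved and mask artifacts are suppressed; one common way to express such a constraint is an L1 penalty between the source and generated images outside the attended region, but the exact formulation used in the paper is not given in this abstract.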

Source journal metrics: CiteScore 8.40; self-citation rate 4.50%; 110 articles published per year.