Inpainting of damaged temple murals using edge- and line-guided diffusion patch GAN.

Frontiers in Artificial Intelligence (IF 3.0, JCR Q2: Computer Science, Artificial Intelligence) · Vol. 7, Article 1453847 · Published 2024-11-06 (eCollection date 2024-01-01) · DOI: 10.3389/frai.2024.1453847 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576470/pdf/
G Sumathi, M Uma Devi
{"title":"Inpainting of damaged temple murals using edge- and line-guided diffusion patch GAN.","authors":"G Sumathi, M Uma Devi","doi":"10.3389/frai.2024.1453847","DOIUrl":null,"url":null,"abstract":"<p><p>Mural paintings are vital cultural expressions, enriching our lives by beautifying spaces, conveying messages, telling stories, and evoking emotions. Ancient temple murals degrade over time due to natural aging, physical damage, etc. Preserving these cultural treasures is challenging. Image inpainting is often used for digital restoration, but existing methods typically overlook naturally degraded areas, using randomly generated binary masks or small, narrow regions for repair. This study proposes a novel architecture to reconstruct large areas of naturally degraded murals, maintaining intrinsic details, avoiding color bias, and preserving artistic excellence. The architecture integrates generative adversarial networks (GANs) and the diffusion model, including a whole structure formation network (WSFN), a semantic color network (SCN), and a diffusion mixture distribution (DIMD) discriminator. The WSFN uses the original image, a line drawing, and an edge map to capture mural details, which are then texturally inpainted in the SCN using gated convolution for enhanced results. Special attention is given to globally extending the receptive field for large-area inpainting. The model is evaluated using custom-degraded mural images collected from Tamil Nadu temples. Quantitative analysis showed superior results than state-of-the-art methods, with SSIM, MSE, PSNR, and LPIPS values of 0.8853, 0.0021, 29.8826, and 0.0426, respectively.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1453847"},"PeriodicalIF":3.0000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576470/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2024.1453847","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Mural paintings are vital cultural expressions, enriching our lives by beautifying spaces, conveying messages, telling stories, and evoking emotions. Ancient temple murals degrade over time through natural aging, physical damage, and other causes, and preserving these cultural treasures is challenging. Image inpainting is often used for digital restoration, but existing methods typically overlook naturally degraded areas, using randomly generated binary masks or repairing only small, narrow regions. This study proposes a novel architecture to reconstruct large areas of naturally degraded murals while maintaining intrinsic details, avoiding color bias, and preserving artistic excellence. The architecture integrates generative adversarial networks (GANs) and a diffusion model, and comprises a whole structure formation network (WSFN), a semantic color network (SCN), and a diffusion mixture distribution (DIMD) discriminator. The WSFN uses the original image, a line drawing, and an edge map to capture mural details, which are then texturally inpainted in the SCN using gated convolution for enhanced results. Special attention is given to globally extending the receptive field for large-area inpainting. The model is evaluated on custom-degraded mural images collected from Tamil Nadu temples. Quantitative analysis showed results superior to state-of-the-art methods, with SSIM, MSE, PSNR, and LPIPS values of 0.8853, 0.0021, 29.8826, and 0.0426, respectively.
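The SCN described in the abstract performs its texture inpainting with gated convolution. As a point of reference, below is a minimal sketch of a gated convolution layer in PyTorch, assuming the common feature-times-sigmoid-gate formulation; the class name, hyperparameters, and input layout are illustrative and not the authors' implementation.

```python
# Minimal sketch of a gated convolution layer (feature * sigmoid(gate)),
# the mechanism the SCN is described as using; NOT the paper's code.
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Convolution whose output is modulated by a learned soft mask (gate)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, dilation=1):
        super().__init__()
        padding = dilation * (kernel_size - 1) // 2
        # One convolution produces both the feature map and the gating map.
        self.conv = nn.Conv2d(in_ch, 2 * out_ch, kernel_size,
                              stride=stride, padding=padding, dilation=dilation)
        self.activation = nn.ELU()

    def forward(self, x):
        feature, gate = torch.chunk(self.conv(x), 2, dim=1)
        # The sigmoid gate lets the layer down-weight pixels inside damaged
        # (masked) regions, which is why gated convolution suits inpainting.
        return self.activation(feature) * torch.sigmoid(gate)

# Example: an RGB mural image concatenated with a binary damage mask (4 channels in).
layer = GatedConv2d(in_ch=4, out_ch=64)
dummy = torch.randn(1, 4, 256, 256)
out = layer(dummy)  # -> torch.Size([1, 64, 256, 256])
```

Similarly, the reported SSIM, MSE, PSNR, and LPIPS figures could be computed for a restored/reference image pair roughly as follows; scikit-image and the lpips package are assumed here for illustration and are not necessarily the evaluation code used in the study.

```python
# Hedged sketch of computing the four reported metrics for one image pair.
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             mean_squared_error)

def evaluate_pair(restored: np.ndarray, reference: np.ndarray) -> dict:
    """Both inputs are float arrays in [0, 1] with shape (H, W, 3)."""
    ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=1.0)
    mse = mean_squared_error(reference, restored)
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)

    # LPIPS expects NCHW tensors scaled to [-1, 1]; the AlexNet backbone
    # weights are downloaded on first use.
    loss_fn = lpips.LPIPS(net='alex')
    to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None].float() * 2 - 1
    lpips_val = loss_fn(to_tensor(reference), to_tensor(restored)).item()

    return {"SSIM": ssim, "MSE": mse, "PSNR": psnr, "LPIPS": lpips_val}
```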

Source journal: Frontiers in Artificial Intelligence · CiteScore: 6.10 · Self-citation rate: 2.50% · Articles published: 272 · Review time: 13 weeks