Image Extraction of Thangka Line Drawings with Transformer

Fubo Wang, Shenglin Geng, Dan Zhang, Mingquan Zhou, Lujia Li, Wei Nian
DOI: 10.1109/ICACTE55855.2022.9943668
Published in: 2022 15th International Conference on Advanced Computer Theory and Engineering (ICACTE)
Publication date: 2022-09-23
Citations: 0

Abstract

In the Thangka drawing process, a painter can produce different types of Thangka from the same line drawing, yet must redraw an identical line drawing each time; drawing and coloring the line draft are time-consuming and laborious. Given the difficulty of obtaining real Thangka line-drawing image data and the distorted results of existing line-drawing extraction methods, this paper proposes ETLTER, a Transformer-based method for extracting Thangka line drawings. By introducing a Vision Transformer, ETLTER simultaneously captures coarse-grained global context, medium-grained local context, and fine-grained detail features across three stages. A feature fusion module (FFM) then fuses the feature information extracted in the three stages to predict the final Thangka manuscript. Combining the results of these three stages, ETLTER generates clear and concise Thangka line drawings. On our own Thangka image dataset, TK1500, the line drawings extracted by our model exhibit less noise and clearer lines than those of existing extraction methods, and are closer to line drawings produced by real Thangka painters. The manuscript images extracted by our method achieve an average rank of 1.167, first among the 30 methods compared. The comprehensive evaluation shows that our method achieves state-of-the-art performance in Thangka line-drawing extraction, and ETLTER is of great value for training new Thangka painters.
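The abstract does not give implementation details for ETLTER's feature fusion module. As a rough, purely illustrative sketch of the general idea it describes (fusing coarse-, medium-, and fine-grained feature maps into one prediction), here is a toy example; the function names, resolutions, and the simple averaging rule are all assumptions, not the paper's actual method.

```python
# Toy sketch of fusing multi-resolution feature maps, as the FFM in
# ETLTER is described as doing. All details here are assumptions.

def upsample_nearest(grid, target):
    """Nearest-neighbour upsampling of a square 2D grid to target x target."""
    src = len(grid)
    scale = target // src
    return [[grid[i // scale][j // scale] for j in range(target)]
            for i in range(target)]

def fuse(coarse, medium, fine):
    """Average the three streams after bringing them to the fine resolution."""
    n = len(fine)
    c = upsample_nearest(coarse, n)
    m = upsample_nearest(medium, n)
    return [[(c[i][j] + m[i][j] + fine[i][j]) / 3.0 for j in range(n)]
            for i in range(n)]

# Toy feature maps: global context (4x4), local context (8x8), detail (16x16).
coarse = [[0.5] * 4 for _ in range(4)]
medium = [[0.2] * 8 for _ in range(8)]
fine = [[0.8] * 16 for _ in range(16)]

fused = fuse(coarse, medium, fine)
print(len(fused), len(fused[0]), fused[0][0])  # 16 16 0.5
```

A real implementation would learn the fusion weights (e.g. a 1x1 convolution over concatenated streams) rather than averaging, but the shape-alignment step is the same.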