ABHINAW: A Method for Automatic Evaluation of Typography within AI-Generated Images

Abhinaw Jagtap, Nachiket Tapas, R. G. Brajesh
arXiv:2409.11874 · arXiv - EE - Image and Video Processing · Published 2024-09-18

Abstract

In the fast-evolving field of generative AI, platforms like MidJourney, DALL-E, and Stable Diffusion have transformed Text-to-Image (T2I) generation. However, despite their impressive ability to create high-quality images, they often struggle to generate accurate text within those images. Theoretically, if we could achieve accurate text generation in AI images in a "zero-shot" manner, it would not only make AI-generated images more meaningful but also democratize the graphic design industry. The first step towards this goal is to create a robust scoring matrix for evaluating text accuracy in AI-generated images. Although existing benchmarking methods such as CLIP Score and T2I-CompBench++ are available, there is still a gap in systematically evaluating text and typography in AI-generated images, especially for diffusion-based methods. In this paper, we introduce a novel evaluation matrix designed explicitly for quantifying the performance of text and typography generation within AI-generated images. We use a letter-by-letter matching strategy to compute exact-match scores between the reference text and the AI-generated text. Our approach to calculating the score accounts for multiple redundancies, such as repetition of words, case sensitivity, mixing of words, and irregular incorporation of letters. Moreover, we develop a novel method, named brevity adjustment, to handle excess text. In addition, we provide a quantitative analysis of frequent errors arising from frequently and less frequently used words. The project page is available at: https://github.com/Abhinaw3906/ABHINAW-MATRIX.
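To make the letter-by-letter matching and brevity-adjustment ideas concrete, the following is a minimal illustrative sketch in Python. The abstract does not give the ABHINAW scoring formula, so the function name typography_score, the prefix-aligned matching rule, and the length-ratio penalty are assumptions for illustration only, not the authors' method.

# Illustrative sketch only: the scoring rule below is an assumption for
# demonstration, not the ABHINAW matrix defined in the paper.

def typography_score(reference: str, generated: str) -> float:
    """Letter-by-letter match of the generated text against the reference,
    with a simple length-ratio penalty ("brevity adjustment") for excess text."""
    ref, gen = reference.strip(), generated.strip()
    if not ref:
        return 0.0

    # Count positions where the aligned characters agree (case-sensitive).
    matches = sum(1 for r, g in zip(ref, gen) if r == g)
    score = matches / len(ref)

    # Penalize excess text: scale the score down by the length ratio
    # when the generated string is longer than the reference.
    if len(gen) > len(ref):
        score *= len(ref) / len(gen)

    return score

# Example: a perfect match scores 1.0; extra trailing text lowers the score.
print(typography_score("OPEN HOUSE", "OPEN HOUSE"))        # 1.0
print(typography_score("OPEN HOUSE", "OPEN HOUSE TODAY"))  # < 1.0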