SynthDoc: Bilingual Documents Synthesis for Visual Document Understanding

Chuanghao Ding, Xuejing Liu, Wei Tang, Juan Li, Xiaoliang Wang, Rui Zhao, Cam-Tu Nguyen, Fei Tan
{"title":"SynthDoc:用于可视化文档理解的双语文档合成","authors":"Chuanghao Ding, Xuejing Liu, Wei Tang, Juan Li, Xiaoliang Wang, Rui Zhao, Cam-Tu Nguyen, Fei Tan","doi":"arxiv-2408.14764","DOIUrl":null,"url":null,"abstract":"This paper introduces SynthDoc, a novel synthetic document generation\npipeline designed to enhance Visual Document Understanding (VDU) by generating\nhigh-quality, diverse datasets that include text, images, tables, and charts.\nAddressing the challenges of data acquisition and the limitations of existing\ndatasets, SynthDoc leverages publicly available corpora and advanced rendering\ntools to create a comprehensive and versatile dataset. Our experiments,\nconducted using the Donut model, demonstrate that models trained with\nSynthDoc's data achieve superior performance in pre-training read tasks and\nmaintain robustness in downstream tasks, despite language inconsistencies. The\nrelease of a benchmark dataset comprising 5,000 image-text pairs not only\nshowcases the pipeline's capabilities but also provides a valuable resource for\nthe VDU community to advance research and development in document image\nrecognition. This work significantly contributes to the field by offering a\nscalable solution to data scarcity and by validating the efficacy of end-to-end\nmodels in parsing complex, real-world documents.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SynthDoc: Bilingual Documents Synthesis for Visual Document Understanding\",\"authors\":\"Chuanghao Ding, Xuejing Liu, Wei Tang, Juan Li, Xiaoliang Wang, Rui Zhao, Cam-Tu Nguyen, Fei Tan\",\"doi\":\"arxiv-2408.14764\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper introduces SynthDoc, a novel synthetic document generation\\npipeline designed to enhance Visual Document Understanding (VDU) by generating\\nhigh-quality, diverse datasets that include text, images, tables, and charts.\\nAddressing the challenges of data acquisition and the limitations of existing\\ndatasets, SynthDoc leverages publicly available corpora and advanced rendering\\ntools to create a comprehensive and versatile dataset. Our experiments,\\nconducted using the Donut model, demonstrate that models trained with\\nSynthDoc's data achieve superior performance in pre-training read tasks and\\nmaintain robustness in downstream tasks, despite language inconsistencies. The\\nrelease of a benchmark dataset comprising 5,000 image-text pairs not only\\nshowcases the pipeline's capabilities but also provides a valuable resource for\\nthe VDU community to advance research and development in document image\\nrecognition. 
This work significantly contributes to the field by offering a\\nscalable solution to data scarcity and by validating the efficacy of end-to-end\\nmodels in parsing complex, real-world documents.\",\"PeriodicalId\":501480,\"journal\":{\"name\":\"arXiv - CS - Multimedia\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.14764\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.14764","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper introduces SynthDoc, a novel synthetic document generation pipeline designed to enhance Visual Document Understanding (VDU) by generating high-quality, diverse datasets that include text, images, tables, and charts. Addressing the challenges of data acquisition and the limitations of existing datasets, SynthDoc leverages publicly available corpora and advanced rendering tools to create a comprehensive and versatile dataset. Our experiments, conducted using the Donut model, demonstrate that models trained with SynthDoc's data achieve superior performance in pre-training read tasks and maintain robustness in downstream tasks, despite language inconsistencies. The release of a benchmark dataset comprising 5,000 image-text pairs not only showcases the pipeline's capabilities but also provides a valuable resource for the VDU community to advance research and development in document image recognition. This work significantly contributes to the field by offering a scalable solution to data scarcity and by validating the efficacy of end-to-end models in parsing complex, real-world documents.
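The abstract gives no implementation details, but the heart of such a pipeline is a renderer that turns corpus text into an aligned image-text pair. Below is a minimal sketch of that step, assuming Pillow as the renderer and DejaVuSans as the font (neither is named in the paper); a real pipeline would also lay out images, tables, charts, and bilingual text.

```python
# Minimal sketch of a corpus-to-image rendering step. Pillow and the
# DejaVuSans font are illustrative assumptions; the abstract does not
# name SynthDoc's actual rendering tools.
import textwrap

from PIL import Image, ImageDraw, ImageFont


def render_page(text, size=(960, 1280), margin=60,
                font_path="DejaVuSans.ttf", font_size=24):
    """Render corpus text onto a blank page; return the (image, text)
    pair that a read-task pre-training set is built from."""
    image = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(image)
    font = ImageFont.truetype(font_path, font_size)
    y, rendered = margin, []
    # Naive greedy line wrapping; a real layout engine would handle
    # columns, figures, and tables as well.
    for line in textwrap.wrap(text, width=60):
        if y > size[1] - margin - font_size:
            break  # page full; overflow would start the next page
        draw.text((margin, y), line, fill="black", font=font)
        rendered.append(line)
        y += int(font_size * 1.4)
    return image, "\n".join(rendered)


page, ground_truth = render_page(
    "SynthDoc renders publicly available corpora into document images "
    "so that each image is paired with exact ground-truth text."
)
page.save("synthetic_page.png")
```

Because the ground truth is known exactly at render time, no manual annotation is needed, which is what makes synthesis a scalable answer to data scarcity.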
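The experiments use Donut, an OCR-free encoder-decoder model that reads a document image and directly generates its text. The following is a hedged sketch of running a public Donut checkpoint on a rendered page with the Hugging Face transformers API; the checkpoint name and the "<s_synthdoc>" task token are illustrative assumptions, not the paper's weights or pre-training prompt.

```python
# Sketch of document parsing with a public Donut checkpoint via
# Hugging Face transformers. The checkpoint and the "<s_synthdoc>"
# task token are assumptions for illustration only.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
model.eval()

image = Image.open("synthetic_page.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut conditions generation on a task prompt token.
decoder_input_ids = processor.tokenizer(
    "<s_synthdoc>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    output_ids = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=512,
    )
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

In a read-task pre-training setup of this kind, the model is trained to generate the page's full ground-truth text; downstream tasks then fine-tune the same encoder-decoder.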