The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts

I. de Rodrigo, A. Sanchez-Cuadrado, J. Boal, A. J. Lopez-Lopez
{"title":"MERIT 数据集:建模并高效地渲染可解释的记录誊本","authors":"I. de Rodrigo, A. Sanchez-Cuadrado, J. Boal, A. J. Lopez-Lopez","doi":"arxiv-2409.00447","DOIUrl":null,"url":null,"abstract":"This paper introduces the MERIT Dataset, a multimodal (text + image + layout)\nfully labeled dataset within the context of school reports. Comprising over 400\nlabels and 33k samples, the MERIT Dataset is a valuable resource for training\nmodels in demanding Visually-rich Document Understanding (VrDU) tasks. By its\nnature (student grade reports), the MERIT Dataset can potentially include\nbiases in a controlled way, making it a valuable tool to benchmark biases\ninduced in Language Models (LLMs). The paper outlines the dataset's generation\npipeline and highlights its main features in the textual, visual, layout, and\nbias domains. To demonstrate the dataset's utility, we present a benchmark with\ntoken classification models, showing that the dataset poses a significant\nchallenge even for SOTA models and that these would greatly benefit from\nincluding samples from the MERIT Dataset in their pretraining phase.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"156 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts\",\"authors\":\"I. de Rodrigo, A. Sanchez-Cuadrado, J. Boal, A. J. Lopez-Lopez\",\"doi\":\"arxiv-2409.00447\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper introduces the MERIT Dataset, a multimodal (text + image + layout)\\nfully labeled dataset within the context of school reports. Comprising over 400\\nlabels and 33k samples, the MERIT Dataset is a valuable resource for training\\nmodels in demanding Visually-rich Document Understanding (VrDU) tasks. By its\\nnature (student grade reports), the MERIT Dataset can potentially include\\nbiases in a controlled way, making it a valuable tool to benchmark biases\\ninduced in Language Models (LLMs). The paper outlines the dataset's generation\\npipeline and highlights its main features in the textual, visual, layout, and\\nbias domains. 
To demonstrate the dataset's utility, we present a benchmark with\\ntoken classification models, showing that the dataset poses a significant\\nchallenge even for SOTA models and that these would greatly benefit from\\nincluding samples from the MERIT Dataset in their pretraining phase.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"156 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.00447\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.00447","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper introduces the MERIT Dataset, a multimodal (text + image + layout) fully labeled dataset within the context of school reports. Comprising over 400 labels and 33k samples, the MERIT Dataset is a valuable resource for training models in demanding Visually-rich Document Understanding (VrDU) tasks. By its nature (student grade reports), the MERIT Dataset can potentially include biases in a controlled way, making it a valuable tool to benchmark biases induced in Language Models (LLMs). The paper outlines the dataset's generation pipeline and highlights its main features in the textual, visual, layout, and bias domains. To demonstrate the dataset's utility, we present a benchmark with token classification models, showing that the dataset poses a significant challenge even for SOTA models and that these would greatly benefit from including samples from the MERIT Dataset in their pretraining phase.
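
The abstract describes benchmarking token classification models on multimodal (text + image + layout) samples. The sketch below illustrates what such a setup can look like with a LayoutLMv3-style model from Hugging Face Transformers; it is a minimal, generic example, not the authors' benchmark code. The file path, OCR words, bounding boxes, and the choice of "microsoft/layoutlmv3-base" are illustrative assumptions; only the "over 400 labels" figure comes from the abstract.

```python
# Minimal sketch: token classification on a visually-rich document page,
# the kind of task the MERIT benchmark targets. All inputs below are
# hypothetical placeholders, not actual MERIT samples.
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModelForTokenClassification

# Placeholder page image plus OCR words and their bounding boxes,
# normalized to the 0-1000 scale that LayoutLM-style models expect.
image = Image.open("report_page.png").convert("RGB")   # hypothetical path
words = ["Mathematics", "A", "Physics", "B+"]          # hypothetical OCR output
boxes = [[80, 120, 260, 150], [300, 120, 330, 150],
         [80, 170, 220, 200], [300, 170, 345, 200]]

# apply_ocr=False because we supply words and boxes ourselves.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
# The abstract reports over 400 labels; 400 here is a stand-in for the real label set.
model = AutoModelForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=400)

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits                  # (1, seq_len, num_labels)
predictions = logits.argmax(-1)                        # predicted label id per token
```

In a real evaluation the classifier head would first be fine-tuned on labeled samples and scored with per-label precision/recall or F1; this snippet only shows the input format (words + boxes + page image) that makes the task multimodal.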