I. de Rodrigo, A. Sanchez-Cuadrado, J. Boal, A. J. Lopez-Lopez
arXiv:2409.00447 · arXiv - CS - Artificial Intelligence · Published 2024-08-31
The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts
This paper introduces the MERIT Dataset, a multimodal (text + image + layout)
fully labeled dataset within the context of school reports. Comprising over 400
labels and 33k samples, the MERIT Dataset is a valuable resource for training
models in demanding Visually-rich Document Understanding (VrDU) tasks. Owing to its
nature (student grade reports), the MERIT Dataset can include biases in a
controlled way, making it a valuable tool for benchmarking the biases
induced in Large Language Models (LLMs). The paper outlines the dataset's generation
pipeline and highlights its main features in the textual, visual, layout, and
bias domains. To demonstrate the dataset's utility, we present a benchmark with
token classification models, showing that the dataset poses a significant
challenge even for state-of-the-art (SOTA) models, and that such models would
benefit greatly from including MERIT Dataset samples in their pretraining phase.
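The token-classification framing mentioned above can be illustrated with a minimal sketch. The label names and sample tokens below are hypothetical, invented purely for illustration; the actual MERIT label set (400+ labels over subjects and grades) is defined by the dataset itself:

```python
# Minimal sketch of the token-classification task on a transcript-like sample.
# Label names here are hypothetical placeholders, not the real MERIT labels.

# A toy BIO-style label set (MERIT itself defines over 400 labels).
LABELS = ["O", "B-SUBJECT", "I-SUBJECT", "B-GRADE"]
LABEL2ID = {label: i for i, label in enumerate(LABELS)}

def encode(tags):
    """Map each token's string tag to the integer id a model would predict."""
    return [LABEL2ID[tag] for tag in tags]

# A made-up grade-report fragment: each token gets one label.
tokens = ["Mathematics", "final", "grade", "9.5"]
tags   = ["B-SUBJECT",   "O",     "O",     "B-GRADE"]

ids = encode(tags)  # integer targets for a token-classification head
```

In a real VrDU benchmark, a layout-aware model would consume the tokens together with their image crops and bounding boxes and emit one label id per token; this sketch only shows the label-encoding side of that setup.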