{"title":"L3iTC at the FinLLM Challenge Task: Quantization for Financial Text Classification & Summarization","authors":"Elvys Linhares Pontes, Carlos-Emiliano González-Gallardo, Mohamed Benjannet, Caryn Qu, Antoine Doucet","doi":"arxiv-2408.03033","DOIUrl":null,"url":null,"abstract":"This article details our participation (L3iTC) in the FinLLM Challenge Task\n2024, focusing on two key areas: Task 1, financial text classification, and\nTask 2, financial text summarization. To address these challenges, we\nfine-tuned several large language models (LLMs) to optimize performance for\neach task. Specifically, we used 4-bit quantization and LoRA to determine which\nlayers of the LLMs should be trained at a lower precision. This approach not\nonly accelerated the fine-tuning process on the training data provided by the\norganizers but also enabled us to run the models on low GPU memory. Our\nfine-tuned models achieved third place for the financial classification task\nwith an F1-score of 0.7543 and secured sixth place in the financial\nsummarization task on the official test datasets.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computational Engineering, Finance, and Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.03033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This article details our participation (L3iTC) in the FinLLM Challenge Task 2024, focusing on two key areas: Task 1, financial text classification, and Task 2, financial text summarization. To address these challenges, we fine-tuned several large language models (LLMs) to optimize performance for each task. Specifically, we combined 4-bit quantization with LoRA, training only low-rank adapter layers while keeping the base model weights frozen at reduced precision. This approach not only accelerated fine-tuning on the training data provided by the organizers but also allowed the models to run on GPUs with limited memory. Our fine-tuned models placed third in the financial classification task with an F1-score of 0.7543 and sixth in the financial summarization task on the official test datasets.
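The abstract describes combining 4-bit quantization with LoRA fine-tuning. As a rough illustration, the sketch below shows how such a setup is commonly configured with the Hugging Face transformers, peft, and bitsandbytes libraries; the model name and every hyperparameter (rank, target modules, quantization type) are assumptions chosen for illustration, not the configuration reported by the authors.

```python
# Minimal QLoRA-style setup: load a causal LM in 4-bit precision and attach
# LoRA adapters so that only a small set of low-rank weights is trained.
# NOTE: model name and all hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "mistralai/Mistral-7B-v0.1"  # assumption: any HF causal LM works here

# 4-bit NF4 quantization keeps the frozen base weights small enough to
# fine-tune on a single GPU with limited memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
# Casts layer norms to full precision and enables gradient checkpointing
# compatibility for k-bit (here 4-bit) training.
model = prepare_model_for_kbit_training(model)

# LoRA: train rank-16 adapters on the attention projections only;
# the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

The resulting model can then be passed to a standard trainer (e.g., transformers' Trainer or trl's SFTTrainer) on the task-specific training data; because only the adapter weights receive gradients, both memory use and fine-tuning time drop substantially, which is the efficiency benefit the abstract highlights.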