{"title":"CatMemo 参加 FinLLM 挑战任务:利用金融应用中的数据融合微调大型语言模型","authors":"Yupeng Cao, Zhiyuan Yao, Zhi Chen, Zhiyang Deng","doi":"arxiv-2407.01953","DOIUrl":null,"url":null,"abstract":"The integration of Large Language Models (LLMs) into financial analysis has\ngarnered significant attention in the NLP community. This paper presents our\nsolution to IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs\nwithin three critical areas of financial tasks: financial classification,\nfinancial text summarization, and single stock trading. We adopted Llama3-8B\nand Mistral-7B as base models, fine-tuning them through Parameter Efficient\nFine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) approaches. To enhance model\nperformance, we combine datasets from task 1 and task 2 for data fusion. Our\napproach aims to tackle these diverse tasks in a comprehensive and integrated\nmanner, showcasing LLMs' capacity to address diverse and complex financial\ntasks with improved accuracy and decision-making capabilities.","PeriodicalId":501294,"journal":{"name":"arXiv - QuantFin - Computational Finance","volume":"13 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications\",\"authors\":\"Yupeng Cao, Zhiyuan Yao, Zhi Chen, Zhiyang Deng\",\"doi\":\"arxiv-2407.01953\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The integration of Large Language Models (LLMs) into financial analysis has\\ngarnered significant attention in the NLP community. This paper presents our\\nsolution to IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs\\nwithin three critical areas of financial tasks: financial classification,\\nfinancial text summarization, and single stock trading. We adopted Llama3-8B\\nand Mistral-7B as base models, fine-tuning them through Parameter Efficient\\nFine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) approaches. To enhance model\\nperformance, we combine datasets from task 1 and task 2 for data fusion. Our\\napproach aims to tackle these diverse tasks in a comprehensive and integrated\\nmanner, showcasing LLMs' capacity to address diverse and complex financial\\ntasks with improved accuracy and decision-making capabilities.\",\"PeriodicalId\":501294,\"journal\":{\"name\":\"arXiv - QuantFin - Computational Finance\",\"volume\":\"13 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuantFin - Computational Finance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2407.01953\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Computational Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.01953","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications
The integration of Large Language Models (LLMs) into financial analysis has garnered significant attention in the NLP community. This paper presents our solution to the IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs within three critical areas of financial tasks: financial classification, financial text summarization, and single stock trading. We adopted Llama3-8B and Mistral-7B as base models, fine-tuning them through Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA). To enhance model performance, we combined the datasets from Task 1 and Task 2 for data fusion. Our approach aims to tackle these tasks in a comprehensive and integrated manner, showcasing LLMs' capacity to address diverse and complex financial tasks with improved accuracy and decision-making capabilities.
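
To make the data-fusion step concrete, here is a minimal sketch of how examples from Task 1 (financial classification) and Task 2 (financial text summarization) could be cast into a shared instruction format and concatenated into one training set. The dataset rows, field names, and instruction template below are illustrative assumptions; the abstract does not specify the paper's exact preprocessing.

```python
# Sketch of data fusion: map both tasks into one instruction-tuning
# format, then concatenate. Rows and field names are hypothetical.
from datasets import Dataset, concatenate_datasets

def to_instruction(example, instruction):
    # Wrap each (input, output) pair in a common prompt template.
    return {
        "text": f"### Instruction:\n{instruction}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}"
    }

# Task 1: financial classification (hypothetical example rows).
task1 = Dataset.from_list([
    {"input": "The Fed raised rates by 25bp.", "output": "hawkish"},
])
# Task 2: financial text summarization (hypothetical example rows).
task2 = Dataset.from_list([
    {"input": "Quarterly report text ...", "output": "Revenue grew 12% YoY."},
])

task1 = task1.map(lambda ex: to_instruction(ex, "Classify the financial text."))
task2 = task2.map(lambda ex: to_instruction(ex, "Summarize the financial text."))

# Fuse the two task datasets and shuffle for training.
fused = concatenate_datasets([task1, task2]).shuffle(seed=42)
```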
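
For the fine-tuning side, a minimal sketch of a LoRA setup with the Hugging Face peft and transformers libraries is shown below. The LoRA hyperparameters (rank, alpha, target modules, dropout) are assumed values for illustration, not those reported by the authors.

```python
# Sketch of PEFT/LoRA fine-tuning setup; hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B"  # or "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

The base model's weights stay frozen; only the injected low-rank adapter matrices are updated, which is what makes fine-tuning 7B-8B models on a single challenge-scale budget practical.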