CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications

Yupeng Cao, Zhiyuan Yao, Zhi Chen, Zhiyang Deng
{"title":"CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications","authors":"Yupeng Cao, Zhiyuan Yao, Zhi Chen, Zhiyang Deng","doi":"arxiv-2407.01953","DOIUrl":null,"url":null,"abstract":"The integration of Large Language Models (LLMs) into financial analysis has\ngarnered significant attention in the NLP community. This paper presents our\nsolution to IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs\nwithin three critical areas of financial tasks: financial classification,\nfinancial text summarization, and single stock trading. We adopted Llama3-8B\nand Mistral-7B as base models, fine-tuning them through Parameter Efficient\nFine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) approaches. To enhance model\nperformance, we combine datasets from task 1 and task 2 for data fusion. Our\napproach aims to tackle these diverse tasks in a comprehensive and integrated\nmanner, showcasing LLMs' capacity to address diverse and complex financial\ntasks with improved accuracy and decision-making capabilities.","PeriodicalId":501294,"journal":{"name":"arXiv - QuantFin - Computational Finance","volume":"13 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Computational Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.01953","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The integration of Large Language Models (LLMs) into financial analysis has garnered significant attention in the NLP community. This paper presents our solution to the IJCAI-2024 FinLLM Challenge, investigating the capabilities of LLMs in three critical areas of financial tasks: financial classification, financial text summarization, and single stock trading. We adopted Llama3-8B and Mistral-7B as base models and fine-tuned them using Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA). To enhance model performance, we combined the datasets from Task 1 and Task 2 for data fusion. Our approach aims to tackle these tasks in a comprehensive and integrated manner, showcasing the capacity of LLMs to address diverse and complex financial tasks with improved accuracy and decision-making capabilities.
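The abstract does not include code, so the following is a minimal sketch of the kind of pipeline it describes: fusing two task datasets and attaching LoRA adapters to a base model. It assumes the Hugging Face transformers, peft, and datasets libraries; the dataset file names, LoRA hyperparameters, and target modules are illustrative placeholders, not the authors' published configuration.

```python
# Minimal sketch (assumed, not the authors' code): data fusion of two
# FinLLM task datasets followed by LoRA-based parameter-efficient
# fine-tuning of a Llama3-8B / Mistral-7B base model.
from datasets import load_dataset, concatenate_datasets
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Meta-Llama-3-8B"  # or "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Data fusion: concatenate the Task 1 (classification) and Task 2
# (summarization) training sets into one instruction-tuning corpus.
# File names are placeholders; both files are assumed to share the
# same instruction/response schema so concatenation is well-defined.
task1 = load_dataset("json", data_files="task1_classification.jsonl")["train"]
task2 = load_dataset("json", data_files="task2_summarization.jsonl")["train"]
fused = concatenate_datasets([task1, task2]).shuffle(seed=42)

# LoRA: inject low-rank adapters into the attention projections so that
# only a small fraction of parameters is trained (PEFT). Hyperparameters
# below are common defaults, not values reported in the paper.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update
    lora_alpha=32,                         # scaling factor for the adapter
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the weights
```

With this setup only the low-rank adapter matrices are updated during training, which is what makes fine-tuning a 7B-8B model feasible on modest hardware compared with full fine-tuning.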