Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications

Qianqian Xie, Dong Li, Mengxi Xiao, Zihao Jiang, Ruoyu Xiang, Xiao Zhang, Zhengyu Chen, Yueru He, Weiguang Han, Yuzhe Yang, Shunian Chen, Yifei Zhang, Lihang Shen, Daniel Kim, Zhiwei Liu, Zheheng Luo, Yangyang Yu, Yupeng Cao, Zhiyang Deng, Zhiyuan Yao, Haohang Li, Duanyu Feng, Yongfu Dai, VijayaSai Somasundaram, Peng Lu, Yilun Zhao, Yitao Long, Guojun Xiong, Kaleb Smith, Honghai Yu, Yanzhao Lai, Min Peng, Jianyun Nie, Jordan W. Suchow, Xiao-Yang Liu, Benyou Wang, Alejandro Lopez-Lira, Jimin Huang, Sophia Ananiadou
arXiv:2408.11878 · arXiv - QuantFin - Computational Finance · Published 2024-08-20 · Citations: 0

Abstract

Large language models (LLMs) have advanced financial applications, yet they often lack sufficient financial knowledge and struggle with tasks involving multi-modal inputs like tables and time series data. To address these limitations, we introduce Open-FinLLMs, a series of Financial LLMs. We begin with FinLLaMA, pre-trained on a 52 billion token financial corpus, incorporating text, tables, and time-series data to embed comprehensive financial knowledge. FinLLaMA is then instruction fine-tuned with 573K financial instructions, resulting in FinLLaMA-instruct, which enhances task performance. Finally, we present FinLLaVA, a multimodal LLM trained with 1.43M image-text instructions to handle complex financial data types. Extensive evaluations demonstrate FinLLaMA's superior performance over LLaMA3-8B, LLaMA3.1-8B, and BloombergGPT in both zero-shot and few-shot settings across 19 and 4 datasets, respectively. FinLLaMA-instruct outperforms GPT-4 and other Financial LLMs on 15 datasets. FinLLaVA excels in understanding tables and charts across 4 multimodal tasks. Additionally, FinLLaMA achieves impressive Sharpe Ratios in trading simulations, highlighting its robust financial application capabilities. We will continually maintain and improve our models and benchmarks to support ongoing innovation in academia and industry.
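The zero-shot and few-shot evaluation settings mentioned in the abstract differ only in whether labeled demonstrations are prepended to the query. A minimal sketch of how such prompts are typically assembled (the field names and template below are illustrative, not the paper's actual evaluation templates):

```python
def build_prompt(task_instruction, query, examples=None):
    """Assemble a zero-shot or few-shot prompt.

    `examples` is a list of (input, label) demonstration pairs;
    passing none yields a zero-shot prompt. The "Input:/Answer:"
    fields are a hypothetical template for illustration only.
    """
    parts = [task_instruction]
    for x, y in (examples or []):
        parts.append(f"Input: {x}\nAnswer: {y}")
    parts.append(f"Input: {query}\nAnswer:")
    return "\n\n".join(parts)


instruction = ("Classify the sentiment of the financial headline "
               "as positive, negative, or neutral.")
headline = "Company X beats quarterly earnings estimates."

# Zero-shot: instruction and query only.
zero_shot = build_prompt(instruction, headline)

# Few-shot: one labeled demonstration precedes the query.
few_shot = build_prompt(
    instruction, headline,
    examples=[("Shares plunge after profit warning.", "negative")])
```

The same model is queried with both prompt forms; only the number of in-context demonstrations changes between the two evaluation settings.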
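The Sharpe ratio reported for the trading simulations is a standard risk-adjusted return metric: mean excess return divided by the standard deviation of returns, usually annualized. A minimal sketch of the computation (the helper and sample data are illustrative, not from the paper's experiments):

```python
import math

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of per-period returns.

    Illustrative helper: subtract the per-period risk-free rate,
    divide mean excess return by its sample standard deviation,
    and scale by sqrt(periods_per_year) to annualize.
    """
    excess = [r - risk_free_rate / periods_per_year for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((x - mean) ** 2 for x in excess) / (len(excess) - 1)
    return (mean / math.sqrt(var)) * math.sqrt(periods_per_year)


# Hypothetical daily strategy returns (252 trading days per year assumed).
daily_returns = [0.002, -0.001, 0.003, 0.001, -0.002, 0.004]
print(f"Sharpe ratio: {sharpe_ratio(daily_returns):.2f}")
```

A higher Sharpe ratio indicates more return earned per unit of volatility, which is why it serves as the headline metric for the trading-simulation evaluation.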