{"title":"Open-FinLLMs:用于金融应用的开放式多模态大语言模型","authors":"Qianqian Xie, Dong Li, Mengxi Xiao, Zihao Jiang, Ruoyu Xiang, Xiao Zhang, Zhengyu Chen, Yueru He, Weiguang Han, Yuzhe Yang, Shunian Chen, Yifei Zhang, Lihang Shen, Daniel Kim, Zhiwei Liu, Zheheng Luo, Yangyang Yu, Yupeng Cao, Zhiyang Deng, Zhiyuan Yao, Haohang Li, Duanyu Feng, Yongfu Dai, VijayaSai Somasundaram, Peng Lu, Yilun Zhao, Yitao Long, Guojun Xiong, Kaleb Smith, Honghai Yu, Yanzhao Lai, Min Peng, Jianyun Nie, Jordan W. Suchow, Xiao-Yang Liu, Benyou Wang, Alejandro Lopez-Lira, Jimin Huang, Sophia Ananiadou","doi":"arxiv-2408.11878","DOIUrl":null,"url":null,"abstract":"Large language models (LLMs) have advanced financial applications, yet they\noften lack sufficient financial knowledge and struggle with tasks involving\nmulti-modal inputs like tables and time series data. To address these\nlimitations, we introduce \\textit{Open-FinLLMs}, a series of Financial LLMs. We\nbegin with FinLLaMA, pre-trained on a 52 billion token financial corpus,\nincorporating text, tables, and time-series data to embed comprehensive\nfinancial knowledge. FinLLaMA is then instruction fine-tuned with 573K\nfinancial instructions, resulting in FinLLaMA-instruct, which enhances task\nperformance. Finally, we present FinLLaVA, a multimodal LLM trained with 1.43M\nimage-text instructions to handle complex financial data types. Extensive\nevaluations demonstrate FinLLaMA's superior performance over LLaMA3-8B,\nLLaMA3.1-8B, and BloombergGPT in both zero-shot and few-shot settings across 19\nand 4 datasets, respectively. FinLLaMA-instruct outperforms GPT-4 and other\nFinancial LLMs on 15 datasets. FinLLaVA excels in understanding tables and\ncharts across 4 multimodal tasks. Additionally, FinLLaMA achieves impressive\nSharpe Ratios in trading simulations, highlighting its robust financial\napplication capabilities. We will continually maintain and improve our models\nand benchmarks to support ongoing innovation in academia and industry.","PeriodicalId":501294,"journal":{"name":"arXiv - QuantFin - Computational Finance","volume":"66 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications\",\"authors\":\"Qianqian Xie, Dong Li, Mengxi Xiao, Zihao Jiang, Ruoyu Xiang, Xiao Zhang, Zhengyu Chen, Yueru He, Weiguang Han, Yuzhe Yang, Shunian Chen, Yifei Zhang, Lihang Shen, Daniel Kim, Zhiwei Liu, Zheheng Luo, Yangyang Yu, Yupeng Cao, Zhiyang Deng, Zhiyuan Yao, Haohang Li, Duanyu Feng, Yongfu Dai, VijayaSai Somasundaram, Peng Lu, Yilun Zhao, Yitao Long, Guojun Xiong, Kaleb Smith, Honghai Yu, Yanzhao Lai, Min Peng, Jianyun Nie, Jordan W. Suchow, Xiao-Yang Liu, Benyou Wang, Alejandro Lopez-Lira, Jimin Huang, Sophia Ananiadou\",\"doi\":\"arxiv-2408.11878\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large language models (LLMs) have advanced financial applications, yet they\\noften lack sufficient financial knowledge and struggle with tasks involving\\nmulti-modal inputs like tables and time series data. To address these\\nlimitations, we introduce \\\\textit{Open-FinLLMs}, a series of Financial LLMs. We\\nbegin with FinLLaMA, pre-trained on a 52 billion token financial corpus,\\nincorporating text, tables, and time-series data to embed comprehensive\\nfinancial knowledge. 
FinLLaMA is then instruction fine-tuned with 573K\\nfinancial instructions, resulting in FinLLaMA-instruct, which enhances task\\nperformance. Finally, we present FinLLaVA, a multimodal LLM trained with 1.43M\\nimage-text instructions to handle complex financial data types. Extensive\\nevaluations demonstrate FinLLaMA's superior performance over LLaMA3-8B,\\nLLaMA3.1-8B, and BloombergGPT in both zero-shot and few-shot settings across 19\\nand 4 datasets, respectively. FinLLaMA-instruct outperforms GPT-4 and other\\nFinancial LLMs on 15 datasets. FinLLaVA excels in understanding tables and\\ncharts across 4 multimodal tasks. Additionally, FinLLaMA achieves impressive\\nSharpe Ratios in trading simulations, highlighting its robust financial\\napplication capabilities. We will continually maintain and improve our models\\nand benchmarks to support ongoing innovation in academia and industry.\",\"PeriodicalId\":501294,\"journal\":{\"name\":\"arXiv - QuantFin - Computational Finance\",\"volume\":\"66 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuantFin - Computational Finance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.11878\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Computational Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.11878","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications
Large language models (LLMs) have advanced financial applications, yet they often lack sufficient financial knowledge and struggle with tasks involving multimodal inputs such as tables and time-series data. To address these limitations, we introduce Open-FinLLMs, a series of financial LLMs. We begin with FinLLaMA, pre-trained on a 52-billion-token financial corpus incorporating text, tables, and time-series data to embed comprehensive financial knowledge. FinLLaMA is then instruction-tuned on 573K financial instructions, yielding FinLLaMA-instruct, which improves task performance. Finally, we present FinLLaVA, a multimodal LLM trained on 1.43M image-text instructions to handle complex financial data types.
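For context, instruction-tuning corpora of this kind typically pair a task instruction with an input and a target output. The record below is a hypothetical illustration of that format; the field names and content are assumptions for exposition, not samples from the paper's 573K-instruction dataset.

```python
# A hypothetical financial instruction-tuning record in the common
# instruction/input/output format; field names are illustrative only.
example = {
    "instruction": (
        "Classify the sentiment of the following financial news "
        "headline as positive, negative, or neutral."
    ),
    "input": (
        "Acme Corp. beats quarterly earnings estimates and raises "
        "full-year guidance."
    ),
    "output": "positive",
}

# During supervised fine-tuning, such a record is usually rendered
# into a single prompt/response pair before tokenization:
prompt = f"{example['instruction']}\n\n{example['input']}\n\n"
target = example["output"]
print(prompt + target)
```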
Extensive evaluations show that FinLLaMA outperforms LLaMA3-8B, LLaMA3.1-8B, and BloombergGPT in both zero-shot and few-shot settings, across 19 and 4 datasets respectively. FinLLaMA-instruct outperforms GPT-4 and other financial LLMs on 15 datasets. FinLLaVA excels at understanding tables and charts across 4 multimodal tasks. Additionally, FinLLaMA achieves impressive Sharpe ratios in trading simulations, highlighting its robust financial-application capabilities. We will continually maintain and improve our models and benchmarks to support ongoing innovation in academia and industry.
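The trading-simulation results are summarized by Sharpe ratios. As a reference, below is a minimal sketch of how an annualized Sharpe ratio is conventionally computed from a strategy's per-period returns; this is the standard textbook formulation, not the authors' evaluation code, and the return series here is synthetic.

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of per-period simple returns.

    Standard formulation: mean excess return divided by its sample
    standard deviation, scaled by sqrt(periods_per_year). The annual
    risk-free rate is converted to a per-period rate. Illustrative
    only; not the paper's evaluation code.
    """
    excess = np.asarray(returns) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Synthetic daily returns standing in for a simulated trading strategy.
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=252)
print(f"Annualized Sharpe ratio: {sharpe_ratio(daily_returns):.2f}")
```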