Jetsons at the FinNLP-2022 ERAI Task: BERT-Chinese for mining high MPP posts

Alolika Gon, Sihan Zha, Sai Krishna Rallabandi, Parag Dakle, Preethi Raghavan
{"title":"Jetsons at the FinNLP-2022 ERAI Task: BERT-Chinese for mining high MPP posts","authors":"Alolika Gon, Sihan Zha, Sai Krishna Rallabandi, Parag Dakle, Preethi Raghavan","doi":"10.18653/v1/2022.finnlp-1.19","DOIUrl":null,"url":null,"abstract":"In this paper, we discuss the various approaches by the Jetsons team for the “Pairwise Comparison” sub-task of the ERAI shared task to compare financial opinions for profitability and loss. Our BERT-Chinese model considers a pair of opinions and predicts the one with a higher maximum potential profit (MPP) with 62.07% accuracy. We analyze the performance of our approaches on both the MPP and maximal loss (ML) problems and deeply dive into why BERT-Chinese outperforms other models.","PeriodicalId":331851,"journal":{"name":"Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2022.finnlp-1.19","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

In this paper, we discuss the various approaches by the Jetsons team for the “Pairwise Comparison” sub-task of the ERAI shared task to compare financial opinions for profitability and loss. Our BERT-Chinese model considers a pair of opinions and predicts the one with a higher maximum potential profit (MPP) with 62.07% accuracy. We analyze the performance of our approaches on both the MPP and maximal loss (ML) problems and deeply dive into why BERT-Chinese outperforms other models.
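The abstract describes a pairwise set-up: the model takes two financial opinions and predicts which one has the higher maximum potential profit (MPP). The paper page does not include code, so the following is only a minimal sketch of that set-up using the Hugging Face transformers library; the checkpoint name (bert-base-chinese), the label convention, the helper function predict_higher_mpp, and the example opinions are all illustrative assumptions, and the model would still need fine-tuning on the ERAI pairwise data before its predictions mean anything.

# Minimal sketch of a pairwise BERT-Chinese comparison, NOT the authors' code.
# Assumptions: bert-base-chinese checkpoint, label 0 = first opinion has the
# higher MPP, label 1 = second opinion does. Fine-tuning on ERAI pairs is
# omitted; untrained heads give near-chance output.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese",
                                                      num_labels=2)
model.eval()

def predict_higher_mpp(opinion_a: str, opinion_b: str) -> int:
    """Encode the two opinions as a sentence pair and return the index
    (0 or 1) of the opinion predicted to have the higher MPP."""
    inputs = tokenizer(opinion_a, opinion_b,
                       truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(torch.argmax(logits, dim=-1).item())

# Hypothetical example pair (placeholder text, not taken from the ERAI data):
winner = predict_higher_mpp("看好这只股票，短期有上涨空间。",
                            "这家公司基本面恶化，建议回避。")
print(f"Opinion predicted to have higher MPP: {winner}")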