BERT for Twitter Sentiment Analysis: Achieving High Accuracy and Balanced Performance

Oladri Renuka, Niranchana Radhakrishnan
{"title":"BERT for Twitter Sentiment Analysis: Achieving High Accuracy and Balanced Performance","authors":"Oladri Renuka, Niranchana Radhakrishnan","doi":"10.36548/jtcsst.2024.1.003","DOIUrl":null,"url":null,"abstract":"The Bidirectional Encoder Representations from Transformers (BERT) model is used in this work to analyse sentiment on Twitter data. A Kaggle dataset of manually annotated and anonymized COVID-19-related tweets was used to refine the model. Location, tweet date, original tweet content, and sentiment labels are all included in the dataset. When compared to the Multinomial Naive Bayes (MNB) baseline, BERT's performance was assessed, and it achieved an overall accuracy of 87% on the test set. The results indicated that for negative feelings, the accuracy was 0.93, the recall was 0.84, and the F1-score was 0.88; for neutral sentiments, the precision was 0.86, the recall was 0.78, and the F1-score was 0.82; and for positive sentiments, the precision was 0.82, the recall was 0.94, and the F1-score was 0.88. The model's proficiency with the linguistic nuances of Twitter, including slang and sarcasm, was demonstrated. This study also identifies the flaws of BERT and makes recommendations for future research paths, such as the integration of external knowledge and alternative designs.","PeriodicalId":107574,"journal":{"name":"Journal of Trends in Computer Science and Smart Technology","volume":"162 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Trends in Computer Science and Smart Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.36548/jtcsst.2024.1.003","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The Bidirectional Encoder Representations from Transformers (BERT) model is used in this work to analyse sentiment in Twitter data. The model was fine-tuned on a Kaggle dataset of manually annotated and anonymized COVID-19-related tweets, where each entry includes the location, tweet date, original tweet content, and sentiment label. BERT's performance was assessed against a Multinomial Naive Bayes (MNB) baseline and achieved an overall accuracy of 87% on the test set. For negative sentiment, precision was 0.93, recall was 0.84, and the F1-score was 0.88; for neutral sentiment, precision was 0.86, recall was 0.78, and the F1-score was 0.82; and for positive sentiment, precision was 0.82, recall was 0.94, and the F1-score was 0.88. These results demonstrate the model's proficiency with the linguistic nuances of Twitter, including slang and sarcasm. The study also identifies BERT's limitations and makes recommendations for future research, such as the integration of external knowledge and alternative architectures.
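The abstract describes the full pipeline: fine-tuning BERT on the annotated COVID-19 tweet corpus, comparing it against an MNB baseline, and reporting per-class precision, recall, and F1. The sketch below is a minimal, hypothetical reconstruction of such a pipeline using Hugging Face Transformers and scikit-learn; the file names, column names, label mapping, and hyperparameters are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of fine-tuning BERT for three-class tweet sentiment and
# comparing it with a Multinomial Naive Bayes baseline. File names, column
# names, the five-to-three label collapse, and all hyperparameters are
# illustrative assumptions, not details reported in the paper.
import pandas as pd
from datasets import Dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CLASS_NAMES = ["Negative", "Neutral", "Positive"]
# Assumed collapse of the Kaggle dataset's raw labels into three classes.
LABEL_MAP = {"Extremely Negative": 0, "Negative": 0, "Neutral": 1,
             "Positive": 2, "Extremely Positive": 2}

# Assumed file and column names for the Kaggle COVID-19 tweet dataset.
train_df = pd.read_csv("Corona_NLP_train.csv", encoding="latin-1")
test_df = pd.read_csv("Corona_NLP_test.csv", encoding="latin-1")
for df in (train_df, test_df):
    df["label"] = df["Sentiment"].map(LABEL_MAP)

# --- Multinomial Naive Bayes baseline on TF-IDF features ---
vectorizer = TfidfVectorizer(max_features=20_000)
X_train = vectorizer.fit_transform(train_df["OriginalTweet"])
X_test = vectorizer.transform(test_df["OriginalTweet"])
mnb = MultinomialNB().fit(X_train, train_df["label"])
print(classification_report(test_df["label"], mnb.predict(X_test),
                            target_names=CLASS_NAMES))

# --- Fine-tune BERT for three-class sentiment classification ---
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(CLASS_NAMES))

def tokenize(batch):
    return tokenizer(batch["OriginalTweet"], truncation=True,
                     padding="max_length", max_length=128)

train_ds = Dataset.from_pandas(train_df[["OriginalTweet", "label"]],
                               preserve_index=False).map(tokenize, batched=True)
test_ds = Dataset.from_pandas(test_df[["OriginalTweet", "label"]],
                              preserve_index=False).map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-covid-sentiment",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()

# Per-class precision, recall, and F1 on the held-out test set.
preds = trainer.predict(test_ds).predictions.argmax(axis=-1)
print(classification_report(test_df["label"], preds, target_names=CLASS_NAMES))
```

Under these assumptions, the two calls to classification_report yield the per-class precision, recall, and F1 comparison between the MNB baseline and the fine-tuned BERT model that the abstract summarizes.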