Determining community happiness index with transformers and attention-based deep learning

Hilman Singgih Wicaksana, Retno Kusumaningrum, R. Gernowo

IAES International Journal of Artificial Intelligence (IJ-AI), vol. 13, no. 2, pp. 1753-1761, June 2024. DOI: 10.11591/ijai.v13.i2.pp1753-1761
In the current digital era, evaluating people's quality of life and happiness index is closely tied to the expressions and opinions they share on the social media platform Twitter. Measuring population welfare goes beyond monetary indicators, focusing more on subjective well-being, and sentiment analysis helps evaluate people's perceptions of aspects of happiness. Aspect-based sentiment analysis (ABSA) effectively identifies sentiments toward predetermined aspects. Previous studies have used word-to-vector (Word2Vec) and long short-term memory (LSTM) methods, with or without an attention mechanism (AM), to solve ABSA cases. However, Word2Vec produces static embeddings and cannot capture the context in which a word appears in a sentence. This study therefore addresses that limitation with bidirectional encoder representations from transformers (BERT), which has the advantage of bidirectional training and yields contextual word representations. Bayesian optimization is used as a hyperparameter tuning technique to find the best combination of parameters during training. Here we show that BERT-LSTM-AM outperforms the Word2Vec-LSTM-AM model in predicting aspects and sentiments. Furthermore, we found BERT to be the strongest of the state-of-the-art embedding techniques for representing words in a sentence. Our results demonstrate how BERT as an embedding technique can significantly improve model performance over Word2Vec.
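The model the abstract describes stacks three pieces: contextual token embeddings from BERT, an LSTM over those embeddings, and an attention mechanism that pools the LSTM states before classification. Below is a minimal PyTorch sketch of such a BERT-LSTM-AM head; the IndoBERT checkpoint name, layer sizes, and three-way output are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a BERT-LSTM-AM classifier, assuming PyTorch and
# Hugging Face transformers. Checkpoint name and sizes are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class BertLstmAttention(nn.Module):
    def __init__(self, bert_name="indobenchmark/indobert-base-p1",
                 hidden_size=128, num_classes=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden_size,
                            batch_first=True, bidirectional=True)
        # Additive attention: score each time step, softmax into
        # weights, and pool the LSTM states as a weighted sum.
        self.attn = nn.Linear(2 * hidden_size, 1)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT.
        embeddings = self.bert(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        states, _ = self.lstm(embeddings)                     # (B, T, 2H)
        scores = self.attn(states).squeeze(-1)                # (B, T)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)               # (B, T)
        pooled = (weights.unsqueeze(-1) * states).sum(dim=1)  # (B, 2H)
        return self.classifier(pooled)

# Usage (hypothetical example tweet):
# tok = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
# batch = tok(["contoh tweet"], return_tensors="pt")
# logits = BertLstmAttention()(batch["input_ids"], batch["attention_mask"])
```

The attention layer here is simple additive scoring over the bidirectional LSTM states; swapping the BERT encoder for static pretrained Word2Vec vectors while keeping the same LSTM-AM stack would give the baseline the paper compares against.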
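For the Bayesian hyperparameter tuning step, here is a minimal sketch assuming Optuna, whose default TPE sampler is one common Bayesian-style optimizer; the search ranges and the train_and_evaluate() helper are hypothetical stand-ins for the paper's actual training loop and parameter space.

```python
# Hedged sketch of Bayesian hyperparameter search with Optuna.
# The search space below is an assumption for illustration.
import optuna

def train_and_evaluate(lr, hidden_size, dropout):
    # Hypothetical stand-in: train the BERT-LSTM-AM model with these
    # hyperparameters and return its validation accuracy.
    return 0.0  # replace with the real validation score

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-3, log=True)
    hidden_size = trial.suggest_categorical("hidden_size", [64, 128, 256])
    dropout = trial.suggest_float("dropout", 0.1, 0.5)
    return train_and_evaluate(lr, hidden_size, dropout)

study = optuna.create_study(direction="maximize")  # maximize accuracy
study.optimize(objective, n_trials=30)
print(study.best_params)  # best parameter combination found
```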