{"title":"An Efficient Text Classification Using fastText for Bahasa Indonesia Documents Classification","authors":"A. Amalia, O. S. Sitompul, E. Nababan, T. Mantoro","doi":"10.1109/DATABIA50434.2020.9190447","DOIUrl":null,"url":null,"abstract":"Text classification using a simple word representation with a linear classifier often considered as strong baselines to gain the best performances. However, a simple word representation like Bag of Word (BOW) has a deficiency of curse dimensionality, so it is only suitable for small datasets. BOW also needs some dependent pre-processing steps like stopwords-removal and stemming. Therefore, the BOW model cannot be implemented automatically because of the dependency in a specific language. On the other hand, deep neural network classifiers can eliminate the pre-processing prerequisite, but this model not efficient in time processing and need a large dataset for the learning process. It becomes a challenge for language that has limitation resources like Bahasa Indonesia. Another novel approach of text classifier is using the fastText model for text classification. This model can minimize pre-processing dependencies and more efficient in training time processing. However, there hasn't been much observation whether the fastText model outperformed the BOW model for small datasets. This paper aims to compare text classification using the TFIDF model as one of the BOW models with a fastText model for 500 news articles in Bahasa Indonesia. The result of this study showed both models gain an outstanding performance, which is 0.97 F-Score. The TFIDF model needs longer pre-processing stages and requiring more training time. Meanwhile, the fastText model only needs to tune some hyperparameters and get similar performance results to the TFIDF model. Based on this study, we can conclude that the fastText model is efficient text classification.","PeriodicalId":165106,"journal":{"name":"2020 International Conference on Data Science, Artificial Intelligence, and Business Analytics (DATABIA)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Data Science, Artificial Intelligence, and Business Analytics (DATABIA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DATABIA50434.2020.9190447","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Text classification using a simple word representation with a linear classifier is often considered a strong baseline for achieving the best performance. However, a simple word representation such as Bag of Words (BOW) suffers from the curse of dimensionality, so it is only suitable for small datasets. BOW also requires language-dependent pre-processing steps such as stopword removal and stemming, so the BOW model cannot be applied automatically across languages. On the other hand, deep neural network classifiers can eliminate the pre-processing prerequisite, but they are not time-efficient and need a large dataset for training. This becomes a challenge for languages with limited resources, such as Bahasa Indonesia. Another approach is to use the fastText model for text classification, which minimizes pre-processing dependencies and is more efficient in training time. However, it has not been widely examined whether the fastText model outperforms the BOW model on small datasets. This paper compares text classification using the TF-IDF model, as one of the BOW models, with the fastText model on 500 news articles in Bahasa Indonesia. The results show that both models achieve outstanding performance, with an F-score of 0.97. The TF-IDF model needs longer pre-processing stages and requires more training time, whereas the fastText model only needs some hyperparameter tuning to reach a performance similar to that of the TF-IDF model. Based on this study, we conclude that the fastText model is an efficient approach to text classification.
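The following is a minimal sketch, not the authors' exact pipeline, of the two approaches compared in the abstract: a TF-IDF (BOW) representation with a linear classifier versus a fastText supervised classifier. It assumes scikit-learn and the fasttext Python package are installed; the toy Bahasa Indonesia texts, category names, and hyperparameter values are illustrative placeholders, not the paper's 500-article dataset or tuned settings.

import tempfile

import fasttext
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus standing in for the 500 Bahasa Indonesia news articles.
texts = [
    "timnas indonesia menang dua gol di laga persahabatan",
    "harga saham bank naik setelah laporan keuangan dirilis",
    "pemerintah mengumumkan anggaran pendidikan tahun depan",
    "pelatih baru memuji performa para pemain muda",
    "bank sentral menahan suku bunga acuan bulan ini",
    "menteri membahas kurikulum baru dengan para guru",
] * 20  # repeat so each class has enough samples for a split
labels = ["olahraga", "ekonomi", "pendidikan"] * 40

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=42, stratify=labels
)

# 1) TF-IDF (a BOW variant) + linear classifier baseline.
tfidf_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
tfidf_clf.fit(X_train, y_train)
print("TF-IDF macro F1:", f1_score(y_test, tfidf_clf.predict(X_test), average="macro"))

# 2) fastText supervised classifier; it reads a plain-text file with one
#    "__label__<class> <text>" example per line.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    for text, label in zip(X_train, y_train):
        f.write(f"__label__{label} {text}\n")
    train_path = f.name

ft_model = fasttext.train_supervised(
    input=train_path,
    epoch=25,        # example hyperparameters to tune; not the paper's values
    lr=0.5,
    wordNgrams=2,
)
ft_pred = [ft_model.predict(t)[0][0].replace("__label__", "") for t in X_test]
print("fastText macro F1:", f1_score(y_test, ft_pred, average="macro"))

The contrast the abstract draws shows up directly in the sketch: the TF-IDF pipeline would normally be preceded by stopword removal and stemming for Bahasa Indonesia, while the fastText side only requires formatting the labels and tuning a few hyperparameters such as epoch, lr, and wordNgrams.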