S. Putra, M. Gunawan, Agung Suryatno. 2018 6th International Conference on Information and Communication Technology (ICoICT), May 2018. DOI: 10.1109/ICOICT.2018.8528762
Tokenization and N-Gram for Indexing Indonesian Translation of the Quran
Tokenization is an important process used to break text into word-level units. The n-gram model is now widely used in computational linguistics to predict the next item in a contiguous sequence of $\mathbf{n}$ items from a given sample of text. This paper focuses on implementing tokenization and an n-gram model in RapidMiner to produce unigrams and bigrams for indexing the Indonesian Translation of the Quran (ITQ). The study uses an ITQ data set consisting of 114 documents. The methods comprise data extraction and text preprocessing, including tokenization, stemming, stopword removal, case transformation, and n-gram generation. The results show that the model produces 6,794 unigram tokens and 60,323 bigram tokens, which are used to index the ITQ. The significant contribution of this study is to enhance the digital index of the ITQ.
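The preprocessing pipeline the abstract describes (case transformation, tokenization, stopword removal, and n-gram generation) can be sketched in Python. This is an illustrative sketch only: the paper's actual implementation uses RapidMiner operators, the stopword list and sample verse below are assumptions, and the stemming step is omitted for brevity.

```python
import re

# Illustrative Indonesian stopword list (assumption; the paper does not
# enumerate its stopwords).
STOPWORDS = {"yang", "dan", "di", "ke", "dari", "itu", "ini", "bagi"}

def preprocess(text):
    """Case transformation + tokenization + stopword removal.

    Stemming (e.g., with an Indonesian stemmer) would follow here in the
    paper's full pipeline; it is omitted from this sketch.
    """
    tokens = re.findall(r"[a-z]+", text.lower())  # lowercase, split on non-letters
    return [t for t in tokens if t not in STOPWORDS]

def ngrams(tokens, n):
    """Generate contiguous n-grams, joined with '_' as index terms."""
    return ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

if __name__ == "__main__":
    # Sample verse fragment (illustrative; not taken from the paper's data set).
    verse = "Segala puji bagi Allah, Tuhan seluruh alam"
    tokens = preprocess(verse)
    print(ngrams(tokens, 1))  # unigram index terms
    print(ngrams(tokens, 2))  # bigram index terms
```

Indexing all 114 ITQ documents this way would yield the combined unigram/bigram vocabulary the paper reports.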