Distributed Training for Multilingual Combined Tokenizer using Deep Learning Model and Simple Communication Protocol
Christian Nathaniel Purwanto, Ary Hermawan, Joan Santoso, Gunawan
2019 1st International Conference on Cybernetics and Intelligent System (ICORIS)
DOI: 10.1109/ICORIS.2019.8874898 · Published: 2019-08-01
Cited by: 1
Abstract
In the big data era, text processing becomes harder as data volumes grow. At the same time, deep learning models are increasingly used to solve natural language processing tasks without hand-crafted rules. In this research, we provide two solutions, one in text preprocessing and one in distributed training for neural-based models. We address the most common text preprocessing tasks: word and sentence tokenization. Our proposed combined tokenizer is evaluated with both a single-language model and a multilingual model. We also provide a simple communication scheme using the MQTT protocol to support distributed training.
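The abstract does not detail how MQTT coordinates the training nodes. As a rough illustration of the publish/subscribe pattern such a scheme could use, the sketch below simulates an MQTT-style broker in memory (a real deployment would use an actual MQTT broker and client library); the topic name, JSON payload format, gradient-averaging server, and learning rate are all assumptions, not taken from the paper.

```python
import json
from collections import defaultdict

# In-memory stand-in for an MQTT broker: topics map to subscriber callbacks.
# (A real setup would publish over a broker such as Mosquitto; this only
# demonstrates the publish/subscribe message flow.)
class Broker:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self.subs[topic]:
            cb(topic, payload)

# Hypothetical parameter server: collects one gradient per worker from the
# "train/gradients" topic, averages them, and applies an SGD step.
class ParameterServer:
    def __init__(self, broker, n_workers, lr=0.5):
        self.pending = []
        self.n_workers = n_workers
        self.lr = lr
        self.weights = [0.0, 0.0]  # toy 2-parameter model
        broker.subscribe("train/gradients", self.on_gradient)

    def on_gradient(self, topic, payload):
        self.pending.append(json.loads(payload)["grad"])
        if len(self.pending) == self.n_workers:
            avg = [sum(col) / self.n_workers for col in zip(*self.pending)]
            self.weights = [w - self.lr * g for w, g in zip(self.weights, avg)]
            self.pending.clear()

broker = Broker()
server = ParameterServer(broker, n_workers=2)
# Two workers publish their local gradients as JSON messages.
broker.publish("train/gradients", json.dumps({"worker": 0, "grad": [1.0, 2.0]}))
broker.publish("train/gradients", json.dumps({"worker": 1, "grad": [3.0, 4.0]}))
print(server.weights)  # averaged gradient [2.0, 3.0], lr 0.5 → [-1.0, -1.5]
```

The decoupling is the point of the design: workers only need the broker address and a topic name, so nodes can join or leave the training pool without reconfiguring peers.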