Developing a Singlish Neural Language Model using ELECTRA
Galangkangin Gotera, Radityo Eko Prasojo, Y. K. Isal
2022 International Conference on Advanced Computer Science and Information Systems (ICACSIS), October 2022
DOI: 10.1109/ICACSIS56558.2022.9923521
We develop and benchmark a Singlish pretrained neural language model. To this end, we build a novel 3 GB Singlish free-text dataset collected from various Singaporean websites. We then leverage ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) to train a transformer-based Singlish language model. ELECTRA is chosen for its resource efficiency, which helps ensure reproducibility. We further build two Singlish text classification datasets: sentiment analysis and language identification. We use these two datasets to fine-tune our ELECTRA model and benchmark the results against other available pretrained models in English and Singlish. Our experiments show that our Singlish ELECTRA model is competitive with the best open-source models we found, despite being pretrained in significantly less time. We publicly release the benchmarking dataset.
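The pipeline the abstract describes (pretrain an ELECTRA encoder, then fine-tune it on a labeled classification dataset) maps directly onto the Hugging Face transformers API. The sketch below illustrates the fine-tuning step for the sentiment-analysis task only; the checkpoint name, the toy Singlish examples, and the binary label set are illustrative assumptions, not the paper's released artifacts or data.

# Minimal fine-tuning sketch, assuming Hugging Face `transformers` and a
# binary sentiment task. Substitute the authors' released Singlish ELECTRA
# weights for the placeholder checkpoint below.
import torch
from transformers import (
    ElectraTokenizerFast,
    ElectraForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Hypothetical stand-in checkpoint; any ELECTRA discriminator works here.
MODEL_NAME = "google/electra-small-discriminator"

tokenizer = ElectraTokenizerFast.from_pretrained(MODEL_NAME)
model = ElectraForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy stand-in for the Singlish sentiment dataset (0 = negative, 1 = positive).
texts = ["this one damn shiok lah", "wah the queue so long, sian"]
labels = [1, 0]

class SinglishSentimentDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels in the format Trainer expects."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="electra-singlish-sentiment",
        num_train_epochs=3,
        per_device_train_batch_size=8,
    ),
    train_dataset=SinglishSentimentDataset(texts, labels),
)
trainer.train()

The same setup benchmarks other pretrained models by swapping MODEL_NAME, which is how the abstract's comparison against English and Singlish baselines would typically be run.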