{"title":"ArMT-TNN:通过阿拉伯语硬参数多任务学习提高自然语言理解性能","authors":"Ali Alkhathlan, Khalid Alomar","doi":"10.3233/kes-230192","DOIUrl":null,"url":null,"abstract":"Multitask learning (MTL) is a machine learning paradigm where a single model is trained to perform several tasks simultaneously. Despite the considerable amount of research on MTL, the majority of it has been centered around English language, while other language such as Arabic have not received as much attention. Most existing Arabic NLP techniques concentrate on single or multitask learning, sharing just a limited number of tasks, between two or three tasks. To address this gap, we present ArMT-TNN, an Arabic Multi-Task Learning using Transformer Neural Network, designed for Arabic natural language understanding (ANLU) tasks. Our approach involves sharing learned information between eight ANLU tasks, allowing for a single model to solve all of them. We achieve this by fine-tuning all tasks simultaneously and using multiple pre-trained Bidirectional Transformer language models, like BERT, that are specifically designed for Arabic language processing. Additionally, we explore the effectiveness of various Arabic language models (LMs) that have been pre-trained on different types of Arabic text, such as Modern Standard Arabic (MSA) and Arabic dialects. Our approach demonstrated outstanding performance compared to all current models on four test sets within the ALUE benchmark, namely MQ2Q, OOLD, SVREG, and SEC, by margins of 3.9%, 3.8%, 10.1%, and 3.7%, respectively. Nonetheless, our approach did not perform as well on the remaining tasks due to the negative transfer of knowledge. This finding highlights the importance of carefully selecting tasks when constructing a benchmark. Our experiments also show that LMs which were pretrained on text types that differ from the text type used for finetuned tasks can still perform well.","PeriodicalId":44076,"journal":{"name":"International Journal of Knowledge-Based and Intelligent Engineering Systems","volume":null,"pages":null},"PeriodicalIF":0.6000,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ArMT-TNN: Enhancing natural language understanding performance through hard parameter multitask learning in Arabic\",\"authors\":\"Ali Alkhathlan, Khalid Alomar\",\"doi\":\"10.3233/kes-230192\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multitask learning (MTL) is a machine learning paradigm where a single model is trained to perform several tasks simultaneously. Despite the considerable amount of research on MTL, the majority of it has been centered around English language, while other language such as Arabic have not received as much attention. Most existing Arabic NLP techniques concentrate on single or multitask learning, sharing just a limited number of tasks, between two or three tasks. To address this gap, we present ArMT-TNN, an Arabic Multi-Task Learning using Transformer Neural Network, designed for Arabic natural language understanding (ANLU) tasks. Our approach involves sharing learned information between eight ANLU tasks, allowing for a single model to solve all of them. We achieve this by fine-tuning all tasks simultaneously and using multiple pre-trained Bidirectional Transformer language models, like BERT, that are specifically designed for Arabic language processing. 
Additionally, we explore the effectiveness of various Arabic language models (LMs) that have been pre-trained on different types of Arabic text, such as Modern Standard Arabic (MSA) and Arabic dialects. Our approach demonstrated outstanding performance compared to all current models on four test sets within the ALUE benchmark, namely MQ2Q, OOLD, SVREG, and SEC, by margins of 3.9%, 3.8%, 10.1%, and 3.7%, respectively. Nonetheless, our approach did not perform as well on the remaining tasks due to the negative transfer of knowledge. This finding highlights the importance of carefully selecting tasks when constructing a benchmark. Our experiments also show that LMs which were pretrained on text types that differ from the text type used for finetuned tasks can still perform well.\",\"PeriodicalId\":44076,\"journal\":{\"name\":\"International Journal of Knowledge-Based and Intelligent Engineering Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.6000,\"publicationDate\":\"2024-01-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Knowledge-Based and Intelligent Engineering Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3233/kes-230192\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Knowledge-Based and Intelligent Engineering Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/kes-230192","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
ArMT-TNN: Enhancing natural language understanding performance through hard parameter multitask learning in Arabic
Multitask learning (MTL) is a machine learning paradigm in which a single model is trained to perform several tasks simultaneously. Despite the considerable amount of research on MTL, most of it has centered on English, while other languages, such as Arabic, have received far less attention. Most existing Arabic NLP techniques concentrate on single-task learning or on multitask learning that shares only a limited number of tasks, typically two or three. To address this gap, we present ArMT-TNN, an Arabic Multi-Task learning model based on a Transformer Neural Network, designed for Arabic natural language understanding (ANLU) tasks. Our approach shares learned information across eight ANLU tasks, allowing a single model to solve all of them. We achieve this by fine-tuning on all tasks simultaneously, using multiple pre-trained bidirectional Transformer language models, such as BERT, that are specifically designed for Arabic language processing. Additionally, we explore the effectiveness of various Arabic language models (LMs) that have been pre-trained on different types of Arabic text, such as Modern Standard Arabic (MSA) and Arabic dialects. Our approach outperformed all current models on four test sets within the ALUE benchmark (MQ2Q, OOLD, SVREG, and SEC) by margins of 3.9%, 3.8%, 10.1%, and 3.7%, respectively. Nonetheless, our approach did not perform as well on the remaining tasks due to negative transfer of knowledge. This finding highlights the importance of carefully selecting tasks when constructing a benchmark. Our experiments also show that LMs pretrained on text types that differ from the text type of the fine-tuning tasks can still perform well.
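The abstract does not include implementation details, so the following is only a minimal illustrative sketch of hard-parameter multitask learning as it is commonly built with PyTorch and Hugging Face transformers: one shared pre-trained Arabic BERT encoder whose weights are updated by every task, plus a small task-specific head per ALUE task, fine-tuned jointly. The encoder name, the task-to-label mapping, and all class and function names are assumptions for illustration, not the authors' actual code.

```python
# Hypothetical sketch of hard-parameter-sharing MTL (not the paper's implementation).
import torch
import torch.nn as nn
from transformers import AutoModel

class HardSharingMTL(nn.Module):
    def __init__(self, encoder_name="aubmindlab/bert-base-arabertv02",
                 task_num_labels=None):
        super().__init__()
        # Shared encoder: its parameters receive gradients from every task.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Assumed label counts per ALUE task, for illustration only.
        task_num_labels = task_num_labels or {"MQ2Q": 2, "OOLD": 2, "SEC": 11, "SVREG": 1}
        # One lightweight task-specific head per task.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]   # [CLS] token representation
        return self.heads[task](pooled)        # task-specific logits / score

def training_step(model, optimizer, task, batch, loss_fns):
    """One joint fine-tuning step: a (task, batch) pair is sampled each step,
    so all tasks are trained simultaneously and the shared encoder is updated."""
    logits = model(task, batch["input_ids"], batch["attention_mask"])
    loss = loss_fns[task](logits, batch["labels"])
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

In this style of setup, cycling the sampled task across batches is what lets knowledge transfer through the shared encoder; the same mechanism can also cause the negative transfer noted above when tasks are poorly matched.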