Effectiveness of Pre-Trained Language Models for the Japanese Winograd Schema Challenge

Journal of Advanced Computational Intelligence and Intelligent Informatics (IF 0.7, Q4: Computer Science, Artificial Intelligence)
Pub Date: 2023-05-20 | DOI: 10.20965/jaciii.2023.p0511 | pp. 511-521
Keigo Takahashi, Teruaki Oka, Mamoru Komachi
{"title":"日语Winograd图式挑战中预训练语言模型的有效性","authors":"Keigo Takahashi, Teruaki Oka, Mamoru Komachi","doi":"10.20965/jaciii.2023.p0511","DOIUrl":null,"url":null,"abstract":"This paper compares Japanese and multilingual language models (LMs) in a Japanese pronoun reference resolution task to determine the factors of LMs that contribute to Japanese pronoun resolution. Specifically, we tackle the Japanese Winograd schema challenge task (WSC task), which is a well-known pronoun reference resolution task. The Japanese WSC task requires inter-sentential analysis, which is more challenging to solve than intra-sentential analysis. A previous study evaluated pre-trained multilingual LMs in terms of training language on the target WSC task, including Japanese. However, the study did not perform pre-trained LM-wise evaluations, focusing on the training language-wise evaluations with a multilingual WSC task. Furthermore, it did not investigate the effectiveness of factors (e.g., model size, learning settings in the pre-training phase, or multilingualism) to improve the performance. In our study, we compare the performance of inter-sentential analysis on the Japanese WSC task for several pre-trained LMs, including multilingual ones. Our results confirm that XLM, a pre-trained LM on multiple languages, performs the best among all considered LMs, which we attribute to the amount of data in the pre-training phase.","PeriodicalId":45921,"journal":{"name":"Journal of Advanced Computational Intelligence and Intelligent Informatics","volume":"49 1","pages":"511-521"},"PeriodicalIF":0.7000,"publicationDate":"2023-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Effectiveness of Pre-Trained Language Models for the Japanese Winograd Schema Challenge\",\"authors\":\"Keigo Takahashi, Teruaki Oka, Mamoru Komachi\",\"doi\":\"10.20965/jaciii.2023.p0511\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper compares Japanese and multilingual language models (LMs) in a Japanese pronoun reference resolution task to determine the factors of LMs that contribute to Japanese pronoun resolution. Specifically, we tackle the Japanese Winograd schema challenge task (WSC task), which is a well-known pronoun reference resolution task. The Japanese WSC task requires inter-sentential analysis, which is more challenging to solve than intra-sentential analysis. A previous study evaluated pre-trained multilingual LMs in terms of training language on the target WSC task, including Japanese. However, the study did not perform pre-trained LM-wise evaluations, focusing on the training language-wise evaluations with a multilingual WSC task. Furthermore, it did not investigate the effectiveness of factors (e.g., model size, learning settings in the pre-training phase, or multilingualism) to improve the performance. In our study, we compare the performance of inter-sentential analysis on the Japanese WSC task for several pre-trained LMs, including multilingual ones. 
Our results confirm that XLM, a pre-trained LM on multiple languages, performs the best among all considered LMs, which we attribute to the amount of data in the pre-training phase.\",\"PeriodicalId\":45921,\"journal\":{\"name\":\"Journal of Advanced Computational Intelligence and Intelligent Informatics\",\"volume\":\"49 1\",\"pages\":\"511-521\"},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2023-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Advanced Computational Intelligence and Intelligent Informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.20965/jaciii.2023.p0511\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Advanced Computational Intelligence and Intelligent Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20965/jaciii.2023.p0511","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This paper compares Japanese and multilingual language models (LMs) on a Japanese pronoun reference resolution task to determine which properties of LMs contribute to Japanese pronoun resolution. Specifically, we tackle the Japanese Winograd schema challenge task (WSC task), a well-known pronoun reference resolution task. The Japanese WSC task requires inter-sentential analysis, which is more challenging than intra-sentential analysis. A previous study evaluated pre-trained multilingual LMs in terms of training language on the target WSC task, including Japanese. However, that study focused on training-language-wise evaluations with a multilingual WSC task and did not evaluate the pre-trained LMs themselves. Furthermore, it did not investigate how factors such as model size, learning settings in the pre-training phase, or multilingualism affect performance. In our study, we compare the inter-sentential analysis performance of several pre-trained LMs, including multilingual ones, on the Japanese WSC task. Our results confirm that XLM, an LM pre-trained on multiple languages, performs best among all considered LMs, which we attribute to the amount of data used in its pre-training phase.
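
Framed concretely, each WSC item asks a model to choose between two candidate antecedents for a pronoun. The sketch below is a minimal illustration of one common way to do this with a pre-trained masked LM, not the authors' implementation: each candidate is substituted for the pronoun and the resulting sentences are compared by pseudo-log-likelihood (masked-LM scoring). The model name, the "{}" template interface, and the toy example are illustrative assumptions.

# A minimal sketch (assumptions noted above): scoring WSC candidates with a
# pre-trained masked LM via pseudo-log-likelihood (PLL).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL = "bert-base-multilingual-cased"  # illustrative; any MLM checkpoint works

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    # Mask each token in turn and sum the log-probability the model
    # assigns to the original token at the masked position.
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

def resolve(template: str, candidates: list[str]) -> str:
    # Pick the candidate whose substitution yields the more plausible sentence.
    return max(candidates, key=lambda c: pseudo_log_likelihood(template.format(c)))

# Toy Japanese rendering of a classic Winograd pair (correct answer: スーツケース):
# resolve("トロフィーはスーツケースに入らなかった。{}が大きすぎたからだ。",
#         ["トロフィー", "スーツケース"])

Running this comparison across Japanese-only and multilingual checkpoints gives the kind of LM-wise evaluation that the abstract contrasts with earlier, training-language-wise studies.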
Source Journal
CiteScore: 1.50
Self-citation rate: 14.30%
Articles published: 89
Journal Introduction: JACIII focuses on advanced computational intelligence and intelligent informatics. The topics include, but are not limited to: fuzzy logic, fuzzy control, neural networks, GA and evolutionary computation, hybrid systems, adaptation and learning systems, distributed intelligent systems, network systems, multimedia, human interface, biologically inspired evolutionary systems, artificial life, chaos, complex systems, fractals, robotics, medical applications, pattern recognition, virtual reality, wavelet analysis, scientific applications, industrial applications, and artistic applications.
Latest Articles in this Journal
The Impact of Individual Heterogeneity on Household Asset Choice: An Empirical Study Based on China Family Panel Studies
Private Placement, Investor Sentiment, and Stock Price Anomaly
Does Increasing Public Service Expenditure Slow the Long-Term Economic Growth Rate? Evidence from China
Prediction and Characteristic Analysis of Enterprise Digital Transformation Integrating XGBoost and SHAP
Industrial Chain Map and Linkage Network Characteristics of Digital Economy