Transformer-based text similarity and second language proficiency: A case of written production by learners of Korean

Gyu-Ho Shin, Boo Kyung Jung, Seongmin Mun
{"title":"Transformer-based text similarity and second language proficiency: A case of written production by learners of Korean","authors":"Gyu-Ho Shin ,&nbsp;Boo Kyung Jung ,&nbsp;Seongmin Mun","doi":"10.1016/j.nlp.2024.100060","DOIUrl":null,"url":null,"abstract":"<div><p>The present study applies two transformer models (BERT; GPT-2) to analyse argumentative essays produced by two first-language groups (Czech; English) of second-language learners of Korean and investigates how informative similarity scores of learner writing obtained by these models explain general language proficiency in Korean. Results show three major aspects on model performance. First, the relationships between the similarity scores and the proficiency scores differ from the tendencies between the human rating scores and the proficiency scores. Second, the degree to which the similarity scores obtained by each model explain the proficiency scores is asymmetric and idiosyncratic. Third, the performance of the two models is affected by learners’ native language and essay topic. These findings invite the need for researchers and educators to pay attention to how computational algorithms operate, together with learner language characteristics and language-specific properties of the target language, in utilising Natural Language Processing methods and techniques for their research or instructional purposes.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100060"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000086/pdfft?md5=c5357abe0301e49c473990485a85a9a2&pid=1-s2.0-S2949719124000086-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Natural Language Processing Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949719124000086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The present study applies two transformer models (BERT; GPT-2) to analyse argumentative essays produced by two first-language groups (Czech; English) of second-language learners of Korean and investigates how well the similarity scores that these models assign to learner writing explain general language proficiency in Korean. Results reveal three major aspects of model performance. First, the relationships between the similarity scores and the proficiency scores differ from those between the human rating scores and the proficiency scores. Second, the degree to which the similarity scores obtained by each model explain the proficiency scores is asymmetric and idiosyncratic. Third, the performance of the two models is affected by learners' native language and essay topic. These findings highlight the need for researchers and educators to attend to how computational algorithms operate, alongside learner-language characteristics and language-specific properties of the target language, when utilising Natural Language Processing methods and techniques for research or instructional purposes.
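The abstract does not spell out how the similarity scores are derived from BERT or GPT-2. The sketch below shows one common way such transformer-based text similarity is computed: mean-pooled contextual embeddings compared by cosine similarity. The model name (bert-base-multilingual-cased), the pooling strategy, and the example sentences are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of transformer-based text similarity (assumed setup, not the
# paper's exact method): mean-pooled BERT embeddings + cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder checkpoint; a Korean-specific model could be swapped in.
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(text: str) -> torch.Tensor:
    """Return a mean-pooled last-hidden-state vector for one text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state        # (1, seq_len, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1).float() # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (1, hidden)

def similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between the pooled embeddings of two texts."""
    return torch.nn.functional.cosine_similarity(embed(text_a), embed(text_b)).item()

# Hypothetical usage: compare a learner essay against a reference essay.
learner_essay = "저는 환경 보호가 중요하다고 생각합니다."
reference_essay = "환경 보호는 우리 모두의 책임입니다."
print(f"similarity = {similarity(learner_essay, reference_essay):.3f}")
```

A score like this could then be entered into a regression against proficiency scores, which is the kind of relationship the study examines; the exact statistical modelling is described in the full paper.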
