Crowdsourcing evaluation of the quality of automatically generated questions for supporting computer-assisted language teaching

Recall · IF 4.6 · JCR Region 1 (Literature) · Q1 (Education & Educational Research) · Pub Date: 2019-10-04 · DOI: 10.1017/S0958344019000193
Maria Chinkina, Simón Ruiz, Walt Detmar Meurers
Recall, vol. 32(1), pp. 145–161
Citations: 5

Abstract

How can state-of-the-art computational linguistic technology reduce the workload and increase the efficiency of language teachers? To address this question, we combine insights from research in second language acquisition and computational linguistics to automatically generate text-based questions to a given text. The questions are designed to draw the learner’s attention to target linguistic forms – phrasal verbs, in this particular case – by requiring them to use the forms or their paraphrases in the answer. Such questions help learners create form-meaning connections and are well suited for both practice and testing. We discuss the generation of a novel type of question combining a wh- question with a gapped sentence, and report the results of two crowdsourcing evaluation studies investigating how well automatically generated questions compare to those written by a language teacher. The first study compares our system output to gold standard human-written questions via crowdsourcing rating. An equivalence test shows that automatically generated questions are comparable to human-written ones. The second crowdsourcing study investigates two types of questions (wh- questions with and without a gapped sentence), their perceived quality, and the responses they elicit. Finally, we discuss the challenges and limitations of creating and evaluating question-generation systems for language learners.
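The combined question type described in the abstract, a wh- question paired with a gapped sentence that blanks out the target phrasal verb, can be illustrated with a minimal rule-based sketch. This is not the authors' system (which generates the wh- question itself from the text); the helper name, example sentence, and hand-written wh- question below are illustrative assumptions.

```python
# Toy sketch of the combined question type: a wh- question plus a gapped
# sentence in which the target phrasal verb is replaced by a blank, so the
# learner must produce the form (or a paraphrase) in the answer.

def make_gapped_wh_question(sentence, phrasal_verb, wh_question):
    """Pair a wh- question with a gapped version of the source sentence
    in which the target phrasal verb is blanked out."""
    if phrasal_verb not in sentence:
        raise ValueError("target form not found in sentence")
    gapped = sentence.replace(phrasal_verb, "______", 1)
    return f"{wh_question} {gapped}"

item = make_gapped_wh_question(
    sentence="The committee turned down the proposal.",
    phrasal_verb="turned down",
    wh_question="What did the committee do to the proposal?",
)
print(item)
# → What did the committee do to the proposal? The committee ______ the proposal.
```

Blanking only the first occurrence keeps the gap aligned with the mention the wh- question asks about when the form appears more than once.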
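The equivalence result reported above is the kind of finding typically established with a two one-sided tests (TOST) procedure. A minimal sketch using a large-sample normal approximation follows; the function name, the ±0.5 equivalence margin on a 1–5 rating scale, and the synthetic rating data are assumptions for illustration, not the study's actual analysis.

```python
import math
import statistics

def tost_equivalence(a, b, margin, alpha=0.05):
    """Two one-sided tests (TOST) for mean equivalence, using a
    large-sample normal approximation to the t distribution.
    Equivalence is declared if the mean difference is significantly
    greater than -margin AND significantly less than +margin."""
    n_a, n_b = len(a), len(b)
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / n_a + statistics.variance(b) / n_b)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    p_lower = 1.0 - phi((diff + margin) / se)  # H0: diff <= -margin
    p_upper = phi((diff - margin) / se)        # H0: diff >= +margin
    return p_lower < alpha and p_upper < alpha

# Synthetic 1-5 quality ratings for machine- vs. human-written questions.
machine = [4.0] * 50 + [5.0] * 50
human = [4.1] * 50 + [4.9] * 50
print(tost_equivalence(machine, human, margin=0.5))  # → True
```

Unlike a conventional t test, where a non-significant result only fails to show a difference, TOST can positively support the claim that two question sources are comparable within a stated margin.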
Source journal: Recall
CiteScore: 8.50
Self-citation rate: 4.40%
Articles per year: 17
Latest articles in this journal
- Forty-two years of computer-assisted language learning research: A scientometric study of hotspot research and trending issues
- Different interlocutors, different EFL interactional strategies: A case study of intercultural telecollaborative projects in secondary classrooms
- Examining the relationships among motivation, informal digital learning of English, and foreign language enjoyment: An explanatory mixed-method study
- ReCALL editorial September 2023 issue
- Sampling and randomisation in experimental and quasi-experimental CALL studies: Issues and recommendations for design, reporting, review, and interpretation