{"title":"检测和评估人工智能生成的文本和人类制作的文本:以第二语言写作教师为例","authors":"Loc Nguyen , Jessie S. Barrot","doi":"10.1016/j.asw.2024.100899","DOIUrl":null,"url":null,"abstract":"<div><div>Artificial intelligence (AI) technologies have recently attracted the attention of second language (L2) writing scholars and practitioners. While they recognize the tool’s viability, they also raised the potential adverse effects of these tools on accurately reflecting students’ actual level of writing performance. It is, therefore, crucial for teachers to discern AI-generated essays from human-produced work for more accurate assessment. However, limited information is available about how they assess and distinguish between essays produced by AI and human authors. Thus, this study analyzed the scores and comments teachers gave and looked into their strategies for identifying the source of the essays. Findings showed that essays by a native English-speaking (NS) lecturer and ChatGPT were rated highly. Meanwhile, essays by an NS college student, non-native English-speaking (NNS) college student, and NNS lecturer scored lower, which made them distinguishable from an AI-generated text. The study also revealed that teachers could not consistently identify the AI-generated text, particularly those written by an NS professional. These findings were attributed to teachers’ past engagement with AI writing tools, familiarity with common L2 learner errors, and exposure to native and non-native English writing. From these results, implications for L2 writing instruction and future research are discussed.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100899"},"PeriodicalIF":4.2000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Detecting and assessing AI-generated and human-produced texts: The case of second language writing teachers\",\"authors\":\"Loc Nguyen , Jessie S. Barrot\",\"doi\":\"10.1016/j.asw.2024.100899\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Artificial intelligence (AI) technologies have recently attracted the attention of second language (L2) writing scholars and practitioners. While they recognize the tool’s viability, they also raised the potential adverse effects of these tools on accurately reflecting students’ actual level of writing performance. It is, therefore, crucial for teachers to discern AI-generated essays from human-produced work for more accurate assessment. However, limited information is available about how they assess and distinguish between essays produced by AI and human authors. Thus, this study analyzed the scores and comments teachers gave and looked into their strategies for identifying the source of the essays. Findings showed that essays by a native English-speaking (NS) lecturer and ChatGPT were rated highly. Meanwhile, essays by an NS college student, non-native English-speaking (NNS) college student, and NNS lecturer scored lower, which made them distinguishable from an AI-generated text. The study also revealed that teachers could not consistently identify the AI-generated text, particularly those written by an NS professional. These findings were attributed to teachers’ past engagement with AI writing tools, familiarity with common L2 learner errors, and exposure to native and non-native English writing. 
From these results, implications for L2 writing instruction and future research are discussed.</div></div>\",\"PeriodicalId\":46865,\"journal\":{\"name\":\"Assessing Writing\",\"volume\":\"62 \",\"pages\":\"Article 100899\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Assessing Writing\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1075293524000928\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessing Writing","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1075293524000928","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Detecting and assessing AI-generated and human-produced texts: The case of second language writing teachers
Artificial intelligence (AI) technologies have recently attracted the attention of second language (L2) writing scholars and practitioners. While they recognize the viability of these tools, they have also raised concerns about the tools' potential to obscure students' actual level of writing performance. It is, therefore, crucial for teachers to discern AI-generated essays from human-produced work for more accurate assessment. However, limited information is available about how teachers assess and distinguish between essays produced by AI and those produced by human authors. This study therefore analyzed the scores and comments teachers gave and examined the strategies they used to identify the source of each essay. Findings showed that essays by a native English-speaking (NS) lecturer and by ChatGPT were rated highly, whereas essays by an NS college student, a non-native English-speaking (NNS) college student, and an NNS lecturer scored lower, making them distinguishable from AI-generated text. The study also revealed that teachers could not consistently identify the AI-generated text, particularly when distinguishing it from essays written by an NS professional. These findings were attributed to teachers' prior engagement with AI writing tools, familiarity with common L2 learner errors, and exposure to native and non-native English writing. From these results, implications for L2 writing instruction and future research are discussed.
Journal Introduction:
Assessing Writing is a refereed international journal providing a forum for ideas, research, and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional (direct and standardised) testing of writing, alternative performance assessments (such as portfolios), workplace sampling, and classroom assessment. The journal covers all stages of the writing assessment process, including needs evaluation, assessment creation, implementation, validation, and test development.