Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing

J. Elliott Casal, Matt Kessler
Journal: Research Methods in Applied Linguistics, Vol. 2, No. 3, Article 100068
DOI: 10.1016/j.rmal.2023.100068
Published: 2023-08-07 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2772766123000289
Citations: 9

Abstract

There has been considerable intrigue surrounding the use of Large Language Model powered AI chatbots such as ChatGPT in research, educational contexts, and beyond. However, most studies have explored such tools’ general capabilities and applications for language teaching purposes. The current study advances this discussion to examine issues pertaining to human judgements, accuracy, and research ethics. Specifically, we investigate: 1) the extent to which linguists/reviewers from top journals can distinguish AI- from human-generated writing, 2) what the bases of reviewers’ decisions are, and 3) the extent to which editors of top Applied Linguistics journals believe AI tools are ethical for research purposes. In the study, reviewers (N = 72) completed a judgement task involving AI- and human-generated research abstracts, and several reviewers participated in follow-up interviews to explain their rationales. Similarly, editors (N = 27) completed a survey and interviews to discuss their beliefs. Findings suggest that despite employing multiple rationales to judge texts, reviewers were largely unsuccessful in identifying AI versus human writing, with an overall positive identification rate of only 38.9%. Additionally, many editors believed there are ethical uses of AI tools for facilitating research processes, yet some disagreed. Future research directions are discussed involving AI tools and academic publishing.
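The overall positive identification rate reported above (38.9%) is simply the share of judgements in which a reviewer's guess matched a text's true source. A minimal sketch of that computation, using hypothetical toy data rather than the study's actual judgement records:

```python
def identification_rate(judgements):
    """Share of (true_source, reviewer_guess) pairs where the guess is correct."""
    correct = sum(1 for true_source, guess in judgements if guess == true_source)
    return correct / len(judgements)

# Hypothetical sample (not the study's data): 2 of 5 guesses are correct.
sample = [("AI", "human"), ("AI", "AI"), ("human", "human"),
          ("human", "AI"), ("AI", "human")]
print(f"{identification_rate(sample):.1%}")  # 40.0%
```

Note that with two balanced classes, chance performance would be 50%, so the reported 38.9% is below what guessing at random would yield.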

Source journal: Research Methods in Applied Linguistics
CiteScore: 4.10; self-citation rate: 0.00%