The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts

Research Ethics (IF 2.1, Q2 ETHICS) · Pub Date: 2023-06-15 · DOI: 10.1177/17470161231180449
Mohammad Hosseini, D. Resnik, Kristi L Holmes
Citations: 13

Abstract

In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals, such as Science, have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would encourage undisclosed use of LLMs. Furthermore, LLMs can be useful in writing, reviewing and editing text, and promote equity in science. Others have argued that LLMs should be mentioned in the acknowledgments since they do not meet all the authorship criteria. We argue that naming LLMs as authors or mentioning them in the acknowledgments are both inappropriate forms of recognition because LLMs do not have free will and therefore cannot be held morally or legally responsible for what they do. Tools in general, and software in particular, are usually cited in-text, followed by being mentioned in the references. We provide suggestions to improve APA Style for referencing ChatGPT to specifically indicate the contributor who used LLMs (because interactions are stored on personal user accounts), the used version and model (because the same version could use different language models and generate dissimilar responses, e.g., ChatGPT May 12 Version GPT3.5 or GPT4), and the time of usage (because LLMs evolve fast and generate dissimilar responses over time). We recommend that researchers who use LLMs: (1) disclose their use in the introduction or methods section to transparently describe details such as used prompts and note which parts of the text are affected, (2) use in-text citations and references (to recognize their used applications and improve findability and indexing), and (3) record and submit their relevant interactions with LLMs as supplementary material or appendices.
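The reference elements the authors recommend (the contributor who used the tool, the version and model, and the time of usage) could, for instance, be captured in a BibTeX-style entry along the following lines. The field choices, URL, and dates here are illustrative assumptions for a sketch, not a format prescribed by the article or by APA Style:

```bibtex
% Hypothetical sketch: fields chosen to carry the three elements the
% article recommends disclosing (who used the tool, which version and
% model, and when). The date and note text below are placeholders.
@misc{openai2023chatgpt,
  author       = {{OpenAI}},
  title        = {ChatGPT (May 12 version, GPT-4)},
  year         = {2023},
  howpublished = {\url{https://chat.openai.com}},
  note         = {Prompts issued by the first author on 2023-05-20;
                  full transcripts provided as supplementary material}
}
```

An entry of this shape pairs naturally with the article's other recommendations: the in-text citation recognizes the application and aids indexing, while the note field points to the recorded interactions submitted as an appendix.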
Source journal: Research Ethics (Arts and Humanities, Philosophy)
CiteScore: 4.30
Self-citation rate: 11.80%
Articles published per year: 17
Review time: 15 weeks