Editors' statement on the responsible use of generative artificial intelligence technologies in scholarly journal publishing

Pub Date: 2023-10-01 | DOI: 10.1111/dewb.12424
Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn
{"title":"编辑关于在学术期刊出版中负责任地使用生成人工智能技术的声明。","authors":"Gregory E. Kaebnick,&nbsp;David Christopher Magnus,&nbsp;Audiey Kao,&nbsp;Mohammad Hosseini,&nbsp;David Resnik,&nbsp;Veljko Dubljević,&nbsp;Christy Rentmeester,&nbsp;Bert Gordijn","doi":"10.1111/dewb.12424","DOIUrl":null,"url":null,"abstract":"<p>The new generative artificial intelligence (AI) tools, and especially the large language models (LLMs) of which ChatGPT is the most prominent example, have the potential to transform many aspects of scholarly publishing. How the transformations will play out remains to be seen, both because the different parties involved in the production and publication of scholarly work are still learning about these tools and because the tools themselves are still in development, but the tools have a vast range of potential uses. Authors are likely to use generative AI to conduct research, frame their thoughts, produce data, search for ways of articulating their thoughts, develop drafts, generate text, revise their writing, and create visuals. Peer reviewers might use AI to help them produce their reviews. Editors might use AI in the initial editorial screening of manuscripts, to locate reviewers, or for copyediting.</p><p>We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be valuable for achieving those goals. We do not pretend to have resolved the many social questions that we think generative AI raises for scholarly publishing, but in the interest of fostering a wider conversation about these questions, we have developed a preliminary set of recommendations about generative AI in scholarly publishing. We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.</p><p>The stance set out here is consistent with those taken by the Committee on Publishing Ethics and many journal publishers, including those that publish or provide publishing services to the journals we edit. Previous position statements have addressed concerns about the use of AI for peer review and the importance of reviewers revealing to authors if they used AI in their review.5 However, to our knowledge, none have addressed the importance of using human reviewers to review manuscripts and editors retaining final decisions over what reviewers to select. Our stance differs from the position of <i>Science</i> magazine, which holds not only that a generative AI tool cannot be an author but also that “text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools.”6 Such a proscription is too broad and may be impossible to enforce, in our view. Yet we recognize that the ethical issues raised by generative AI are complex, and we have struggled to decide how editors should promote responsible use of these technologies. Over time, we hope, the community of scholars will develop professional norms about the appropriate ways of using these new tools. Reviewers and readers, not just editors, will have much to say about these norms. The variety of ways in which generative AI technologies can be used and the pace of change may, in fact, render detailed editorial policy statements ineffective or impracticable. 
Instead, reliance on evolving professional norms based on broader public conversation about generative AI technologies may turn out to be the best way forward. Our shared statement is intended to promote this wider social discourse.</p><p>David Resnik's contribution to this editorial was supported by the Intramural Research Program of the National Institute of Environmental Health Sciences (NIEHS) at the National Institutes of Health (NIH). Mohammad Hosseini's contribution was supported by the National Center for Advancing Translational Sciences (NCATS) (through grant UL1TR001422). The funders have not played a role in the design, analysis, decision to publish, or preparation of the manuscript. Veljko Dubljević's contribution was partially supported by the National Science Foundation (NSF) CAREER award (#2043612). This work does not represent the views of the NIEHS, NCATS, NIH, NSF, or US government.</p>","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/dewb.12424","citationCount":"0","resultStr":"{\"title\":\"Editors' statement on the responsible use of generative artificial intelligence technologies in scholarly journal publishing\",\"authors\":\"Gregory E. Kaebnick,&nbsp;David Christopher Magnus,&nbsp;Audiey Kao,&nbsp;Mohammad Hosseini,&nbsp;David Resnik,&nbsp;Veljko Dubljević,&nbsp;Christy Rentmeester,&nbsp;Bert Gordijn\",\"doi\":\"10.1111/dewb.12424\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The new generative artificial intelligence (AI) tools, and especially the large language models (LLMs) of which ChatGPT is the most prominent example, have the potential to transform many aspects of scholarly publishing. How the transformations will play out remains to be seen, both because the different parties involved in the production and publication of scholarly work are still learning about these tools and because the tools themselves are still in development, but the tools have a vast range of potential uses. Authors are likely to use generative AI to conduct research, frame their thoughts, produce data, search for ways of articulating their thoughts, develop drafts, generate text, revise their writing, and create visuals. Peer reviewers might use AI to help them produce their reviews. Editors might use AI in the initial editorial screening of manuscripts, to locate reviewers, or for copyediting.</p><p>We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be valuable for achieving those goals. We do not pretend to have resolved the many social questions that we think generative AI raises for scholarly publishing, but in the interest of fostering a wider conversation about these questions, we have developed a preliminary set of recommendations about generative AI in scholarly publishing. We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.</p><p>The stance set out here is consistent with those taken by the Committee on Publishing Ethics and many journal publishers, including those that publish or provide publishing services to the journals we edit. 
Previous position statements have addressed concerns about the use of AI for peer review and the importance of reviewers revealing to authors if they used AI in their review.5 However, to our knowledge, none have addressed the importance of using human reviewers to review manuscripts and editors retaining final decisions over what reviewers to select. Our stance differs from the position of <i>Science</i> magazine, which holds not only that a generative AI tool cannot be an author but also that “text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools.”6 Such a proscription is too broad and may be impossible to enforce, in our view. Yet we recognize that the ethical issues raised by generative AI are complex, and we have struggled to decide how editors should promote responsible use of these technologies. Over time, we hope, the community of scholars will develop professional norms about the appropriate ways of using these new tools. Reviewers and readers, not just editors, will have much to say about these norms. The variety of ways in which generative AI technologies can be used and the pace of change may, in fact, render detailed editorial policy statements ineffective or impracticable. Instead, reliance on evolving professional norms based on broader public conversation about generative AI technologies may turn out to be the best way forward. Our shared statement is intended to promote this wider social discourse.</p><p>David Resnik's contribution to this editorial was supported by the Intramural Research Program of the National Institute of Environmental Health Sciences (NIEHS) at the National Institutes of Health (NIH). Mohammad Hosseini's contribution was supported by the National Center for Advancing Translational Sciences (NCATS) (through grant UL1TR001422). The funders have not played a role in the design, analysis, decision to publish, or preparation of the manuscript. Veljko Dubljević's contribution was partially supported by the National Science Foundation (NSF) CAREER award (#2043612). This work does not represent the views of the NIEHS, NCATS, NIH, NSF, or US government.</p>\",\"PeriodicalId\":0,\"journal\":{\"name\":\"\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0,\"publicationDate\":\"2023-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/dewb.12424\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/dewb.12424\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"98","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/dewb.12424","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The new generative artificial intelligence (AI) tools, and especially the large language models (LLMs) of which ChatGPT is the most prominent example, have the potential to transform many aspects of scholarly publishing. How the transformations will play out remains to be seen, both because the different parties involved in the production and publication of scholarly work are still learning about these tools and because the tools themselves are still in development, but the tools have a vast range of potential uses. Authors are likely to use generative AI to conduct research, frame their thoughts, produce data, search for ways of articulating their thoughts, develop drafts, generate text, revise their writing, and create visuals. Peer reviewers might use AI to help them produce their reviews. Editors might use AI in the initial editorial screening of manuscripts, to locate reviewers, or for copyediting.

We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be valuable for achieving those goals. We do not pretend to have resolved the many social questions that we think generative AI raises for scholarly publishing, but in the interest of fostering a wider conversation about these questions, we have developed a preliminary set of recommendations about generative AI in scholarly publishing. We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.

The stance set out here is consistent with those taken by the Committee on Publication Ethics and many journal publishers, including those that publish or provide publishing services to the journals we edit. Previous position statements have addressed concerns about the use of AI for peer review and the importance of reviewers revealing to authors whether they used AI in their review.5 However, to our knowledge, none have addressed the importance of using human reviewers to review manuscripts and of editors retaining final decisions over which reviewers to select. Our stance differs from the position of Science magazine, which holds not only that a generative AI tool cannot be an author but also that “text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools.”6 Such a proscription is too broad and may be impossible to enforce, in our view. Yet we recognize that the ethical issues raised by generative AI are complex, and we have struggled to decide how editors should promote responsible use of these technologies. Over time, we hope, the community of scholars will develop professional norms about the appropriate ways of using these new tools. Reviewers and readers, not just editors, will have much to say about these norms. The variety of ways in which generative AI technologies can be used and the pace of change may, in fact, render detailed editorial policy statements ineffective or impracticable. Instead, reliance on evolving professional norms based on broader public conversation about generative AI technologies may turn out to be the best way forward. Our shared statement is intended to promote this wider social discourse.

David Resnik's contribution to this editorial was supported by the Intramural Research Program of the National Institute of Environmental Health Sciences (NIEHS) at the National Institutes of Health (NIH). Mohammad Hosseini's contribution was supported by the National Center for Advancing Translational Sciences (NCATS) (through grant UL1TR001422). The funders have not played a role in the design, analysis, decision to publish, or preparation of the manuscript. Veljko Dubljević's contribution was partially supported by the National Science Foundation (NSF) CAREER award (#2043612). This work does not represent the views of the NIEHS, NCATS, NIH, NSF, or US government.
