Chatbots, Generative AI, and Scholarly Manuscripts: WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications

Chris Zielinski, Margaret Winker, Rakesh Aggarwal, Lorraine Ferris, Markus Heinemann, Jose Florencio Lapeña, Sanjay Pai, Edsel Ing, Leslie Citrome, Murad Alam, Michael Voight, F. Habibzadeh
Journal: Philippine Journal of Pathology
DOI: 10.21141/pjp.2023.08
Published: 2023-06-04

Introduction

This statement revises our earlier “WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications” (January 20, 2023). The revision reflects the proliferation of chatbots and their expanding use in scholarly publishing over the last few months, as well as emerging concerns regarding lack of authenticity of content when using chatbots. These Recommendations are intended to inform editors and help them develop policies for the use of chatbots in papers published in their journals. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work, and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we will continue to modify these recommendations as the software and its applications develop.

A chatbot is a tool “[d]riven by [artificial intelligence], automated rules, natural-language processing (NLP), and machine learning (ML)…[to] process data to deliver responses to requests of all kinds.”1 Artificial intelligence (AI) is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”2

“Generative modeling is an artificial intelligence technique that generates synthetic artifacts by analyzing training examples; learning their patterns and distribution; and then creating realistic facsimiles. Generative AI (GAI) uses generative modeling and advances in deep learning (DL) to produce diverse content at scale by utilizing existing media such as text, graphics, audio, and video.”3,4

Chatbots are activated by a plain-language instruction, or “prompt,” provided by the user. They generate responses using statistical and probability-based language models.5 This output has some characteristic properties. It is usually linguistically accurate and fluent but, to date, it is often compromised in various ways. For example, chatbot output currently carries the risk of including biases, distortions, irrelevancies, misrepresentations, and plagiarism, many of which are caused by the algorithms governing its generation and are heavily dependent on the contents of the materials used in its training. Consequently, there are concerns about the effects of chatbots on knowledge creation and dissemination – including their potential to spread and amplify mis- and disinformation6 – and their broader impact on jobs and the economy, as well as the health of individuals and populations. New legal issues have also arisen in connection with chatbots and generative AI.7

Chatbots retain the information supplied to them, including content and prompts, and may use this information in future responses. Therefore, scholarly content that is generated or edited using AI would be retained and, as a result, could potentially appear in future responses, further increasing the risk of inadvertent plagiarism on the part of the user and any future users of the technology. Anyone who needs to maintain the confidentiality of a document, including authors, editors, and reviewers, should be aware of this issue before considering using chatbots to edit or generate work.9

Chatbots and their applications illustrate the powerful possibilities of generative AI, as well as the risks. These Recommendations seek to suggest a workable approach to valid concerns about the use of chatbots in scholarly publishing.
Citations: 9

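The statement notes that chatbots generate responses using statistical and probability-based language models, and that output quality depends heavily on the training material. As a loose, minimal sketch of that idea only (a toy bigram model; the corpus, function names, and sampling scheme below are illustrative assumptions, not any real chatbot's architecture):

```python
import random
from collections import defaultdict

def train_bigram(corpus_tokens):
    """Count word -> next-word transitions and normalize to probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1
    model = {}
    for prev, nexts in counts.items():
        total = sum(nexts.values())
        model[prev] = {w: c / total for w, c in nexts.items()}
    return model

def generate(model, start, length, seed=0):
    """Sample a continuation by repeatedly drawing the next word
    from the learned probability distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        dist = model.get(out[-1])
        if not dist:
            break  # no known continuation for this word
        words, probs = zip(*dist.items())
        out.append(rng.choices(words, weights=probs, k=1)[0])
    return " ".join(out)

# A tiny "training set": everything the model can ever say comes from here,
# which mirrors the statement's point that output is heavily dependent on
# the contents of the training material.
corpus = "the chatbot writes text and the chatbot edits text".split()
model = train_bigram(corpus)
print(generate(model, "the", 4))
```

Real systems use neural networks over vast corpora rather than bigram counts, but the mechanism illustrated here, sampling each next token from a learned probability distribution, is why fluent output can still reproduce the biases and content of its training data.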