Maintaining scientific integrity and high research standards against the backdrop of rising artificial intelligence use across fields

Fiona McGowan Martha Morrison, Nima Rezaei, Amanuel Godana Arero, Vasko Graklanov, Sevan Iritsyan, Mariya Ivanovska, Rangariari Makuku, Leander Penaso Marquez, Kseniia Minakova, Lindelwa Phakamile Mmema, Piotr Rzymski, Ganna Zavolodko
{"title":"在各领域人工智能应用不断增加的背景下,保持科学诚信和高研究标准","authors":"Fiona McGowan Martha Morrison, Nima Rezaei, Amanuel Godana Arero, Vasko Graklanov, Sevan Iritsyan, Mariya Ivanovska, Rangariari Makuku, Leander Penaso Marquez, Kseniia Minakova, Lindelwa Phakamile Mmema, Piotr Rzymski, Ganna Zavolodko","doi":"10.21037/jmai-23-63","DOIUrl":null,"url":null,"abstract":"Abstract: Artificial intelligence (AI) technologies have already played a revolutionary role in scientific research, from diagnostics to text-generative AI used in scientific writing. The use of AI in the scientific field needs transparent regulation, especially with a longstanding history of use—the first AI technologies in science were developed in the 1950s. Since then, AI has gone from being able to alter texts to producing them using billions of parameters to generate accurate and natural texts. However, scientific work requires high ethical and professional standards, and the rise of AI use in the field has led to many institutions and journals releasing statements and restrictions on its use. AI, being reliant on its users can exacerbate and increase existing biases in the field without being able to take accountability. AI responses can also often lack specificity and depth. However, it is important not to condemn the use of AI in scientific work as a whole. This article has partial use of an AI large language model (LLM), specifically Chatbot Generative Pre-Trained Transformer (ChatGPT), to demonstrate the theories with clear examples. Several recommendations on both a strategic and regulatory level have been formulated in this paper to enable the complementary use of AI alongside ethically-conducted scientific research or for educational purposes, where it shows great potential as a transformative force in interactive work. Policymakers should create wide-reaching, clear guidelines and legal frameworks for using AI to remove the burden of consideration from educators and senior researchers. Caution in the scientific community is advised, though further understanding and work to improve AI use is encouraged.","PeriodicalId":73815,"journal":{"name":"Journal of medical artificial intelligence","volume":"101 3-4","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Maintaining scientific integrity and high research standards against the backdrop of rising artificial intelligence use across fields\",\"authors\":\"Fiona McGowan Martha Morrison, Nima Rezaei, Amanuel Godana Arero, Vasko Graklanov, Sevan Iritsyan, Mariya Ivanovska, Rangariari Makuku, Leander Penaso Marquez, Kseniia Minakova, Lindelwa Phakamile Mmema, Piotr Rzymski, Ganna Zavolodko\",\"doi\":\"10.21037/jmai-23-63\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract: Artificial intelligence (AI) technologies have already played a revolutionary role in scientific research, from diagnostics to text-generative AI used in scientific writing. The use of AI in the scientific field needs transparent regulation, especially with a longstanding history of use—the first AI technologies in science were developed in the 1950s. Since then, AI has gone from being able to alter texts to producing them using billions of parameters to generate accurate and natural texts. 
However, scientific work requires high ethical and professional standards, and the rise of AI use in the field has led to many institutions and journals releasing statements and restrictions on its use. AI, being reliant on its users can exacerbate and increase existing biases in the field without being able to take accountability. AI responses can also often lack specificity and depth. However, it is important not to condemn the use of AI in scientific work as a whole. This article has partial use of an AI large language model (LLM), specifically Chatbot Generative Pre-Trained Transformer (ChatGPT), to demonstrate the theories with clear examples. Several recommendations on both a strategic and regulatory level have been formulated in this paper to enable the complementary use of AI alongside ethically-conducted scientific research or for educational purposes, where it shows great potential as a transformative force in interactive work. Policymakers should create wide-reaching, clear guidelines and legal frameworks for using AI to remove the burden of consideration from educators and senior researchers. Caution in the scientific community is advised, though further understanding and work to improve AI use is encouraged.\",\"PeriodicalId\":73815,\"journal\":{\"name\":\"Journal of medical artificial intelligence\",\"volume\":\"101 3-4\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of medical artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.21037/jmai-23-63\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of medical artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21037/jmai-23-63","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Abstract: Artificial intelligence (AI) technologies have already played a revolutionary role in scientific research, from diagnostics to text-generative AI used in scientific writing. The use of AI in the scientific field needs transparent regulation, especially given its longstanding history of use: the first AI technologies in science were developed in the 1950s. Since then, AI has progressed from altering existing texts to generating accurate, natural-sounding text with models built on billions of parameters. However, scientific work requires high ethical and professional standards, and the rise of AI use in the field has led many institutions and journals to release statements and restrictions on its use. AI, being reliant on its users, can exacerbate and amplify existing biases in the field while being unable to take accountability. AI responses can also often lack specificity and depth. However, it is important not to condemn the use of AI in scientific work as a whole. This article makes partial use of an AI large language model (LLM), specifically the Chat Generative Pre-trained Transformer (ChatGPT), to demonstrate these points with clear examples. Several recommendations at both the strategic and regulatory level are formulated in this paper to enable the complementary use of AI alongside ethically conducted scientific research or for educational purposes, where it shows great potential as a transformative force in interactive work. Policymakers should create wide-reaching, clear guidelines and legal frameworks for the use of AI to remove the burden of consideration from educators and senior researchers. Caution in the scientific community is advised, though further understanding and work to improve AI use are encouraged.
Source journal: Journal of medical artificial intelligence
CiteScore: 2.30
Self-citation rate: 0.00%
Articles published: 0