Discussion on the Artificial Intelligence (AI) Tools Usage in the Scientific World
Mazhar Özkan, H. Sasani
European Journal of Therapeutics, published 2023-09-09. DOI: 10.58600/eurjther1837
Dear Editors,
We have been reading with great interest the editorial discussion on “Artificial Intelligence and Co-Authorship” that you initiated some time ago [1]. In the current era, the volume of data generated by routine applications and scientific research, together with the resulting outcomes, has surpassed what the human mind can read and evaluate. This has created a need to summarize data and to develop information-processing applications for easy access, leading to the design of automated, artificial-intelligence-based tools. Nowadays, these tools are used in processes ranging from data collection and analysis to hypothesis generation, experimentation, and simulation.
The use of Artificial Intelligence (AI) tools is highly beneficial in conducting and reporting scientific research. In particular, a wide range of AI-based tools has been developed for tasks such as reviewing the literature, identifying research gaps, and mapping collaborations among researchers and institutions, making these tasks easier for researchers to accomplish. However, researchers are still seeking ways to speed up the time-consuming aspects of writing up their research.
AI systems can automate repetitive tasks efficiently and with minimal error, allowing humans to focus on more creative and strategic work. They can support better decisions by forecasting future outcomes from various types of existing data. After analysing similar content, they can generate purposeful creative content. They can give comprehensive, informative answers to questions on topics that humans may not fully understand. And, of course, they can translate text and speech accurately and fluently into other languages.
Misuse of AI tools, or misinterpretation of the results they produce, can have significantly adverse consequences. One notable example is the unchecked preparation of academic papers by AI-based software. ChatGPT has in fact been listed as a co-author on at least four articles in the literature, although corrections have since been issued in some cases because of its inaccuracies. A search of the Web of Science shows that a correction removed ChatGPT from the author list of one article in which it had previously been named as a co-author [2]; in two articles in the British Journalism Review and in three articles about ChatGPT in other journals, it was listed as a group author.
It has been observed that while AI models such as ChatGPT can generate text that appears human-like, they can misinterpret material and present false references, as studies in the literature have highlighted. Therefore, AI-based software such as ChatGPT should not be credited as a co-author without scrutiny; it should be used as a tool, like any other software, with the generated text subject to human oversight. Accordingly, full responsibility for what these AI tools produce should rest with the author(s) submitting the article and cannot be attributed to the AI [3].
Organizations such as the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), and the JAMA Network are important regulatory bodies for the content and quality of academic publications. They emphasize that an entity that cannot fulfil authorship requirements, such as declaring conflicts of interest and managing publication rights and licensing agreements, cannot be an author of a paper; AI tools cannot fulfil these duties [4-6]. In line with our recommendations above, these organizations also state that authors must bear full responsibility for everything the AI tool contributes to the manuscript and for the article's adherence to ethical standards.
In conclusion, AI-based applications contribute significantly to academic research, as they do in many other fields, and serve as important tools for researchers in academic writing. With long-term development and improvement, we believe they will gain the ability to write a substantial portion of academic papers as their literature-review capabilities expand. However, the accuracy and originality of the written output must always remain subject to human oversight if it is to make new contributions to the literature. Here, AI-based applications come into play again, with some claiming to distinguish AI-generated from human-created content with approximately 99% accuracy. Cases in which content was deemed AI-generated have been corrected through legal action or appeals to higher authorities [7]. Ultimately, the use of AI-based tools such as ChatGPT, and of AI-generated content, in academic studies should, like other aspects of academic work, be regulated with ethical considerations in mind.
Yours Sincerely,