The recent emergence of large language models (LLMs) such as ChatGPT into public use has transformed academic communication. While LLMs can enhance productivity and accessibility, their use in research comes with significant ethical and methodological challenges. Here, we provide suggestions on how to use LLMs responsibly when preparing scientific manuscripts, based on currently available knowledge and guidelines. We first outline the potential benefits of LLMs across different stages of writing, from finding relevant literature to drafting and editing, and finally to formatting and pre-submission checks. However, this use should be tempered with awareness of potential risks, such as “hallucinated” information, plagiarism, bias, and breaches of confidentiality and copyright. With this in mind, we clarify principles of authorship, transparency, data protection, and academic integrity in relation to LLM use, emphasizing that LLMs cannot be listed as authors and cannot replace human reasoning, interpretation, or accountability. Practical recommendations are therefore provided to help researchers verify LLM-generated content, keep records of prompts, maintain originality, and follow institutional rules. The most important take-home point is that LLMs should remain tools that support, but do not substitute for, the work of researchers in academia, as their use always requires human oversight. In the context of this paper, we highlight that authors always remain accountable for the final interpretation of their findings and their representation in the manuscript. © 2026 Wiley Periodicals LLC.