Background: The integration of AI in academic publishing has raised significant ethical concerns, particularly regarding the practice of prompt injection, where hidden instructions are embedded in manuscripts to manipulate AI responses in the peer review process.
Methods: This study employed a mixed-methods approach, combining a comprehensive content analysis of academic integrity guidelines with a survey of 194 stakeholders, including authors, peer reviewers, and journal editors from various academic fields. The survey focused on their awareness of prompt injection, perceptions of its ethical implications, and views on AI transparency in peer review.
Results: The findings reveal that a substantial proportion of participants (80%) support greater transparency in the use of AI in peer review. Many respondents reported frustration with the inconsistency and ineffectiveness of AI-generated feedback, which prompted some to consider prompt injection as a strategy for securing favorable review outcomes. Importantly, the analysis identified a significant gap in current definitions of research misconduct, which do not adequately address the ethical implications of AI interventions.
Conclusions: This study highlights the urgent need for revised ethical frameworks that incorporate AI-related issues in academic publishing, advocating for policies that promote transparency and uphold the integrity of the peer review process.
