Background: The integration of artificial intelligence, specifically large language models, into editorial processes is gaining interest due to its potential to streamline manuscript assessment, particularly of ethical and transparency reporting in public health journals. This study aims to evaluate the capabilities and limitations of ChatGPT-4.0 in accurately detecting missing ethical and transparency statements in research articles published in high-ranked (Q1) versus low-ranked (Q4) public health journals.
Methods: Articles from top-tier (Q1) and low-tier (Q4) public health journals were analyzed using ChatGPT-4.0 for the presence of essential ethical components, including ethics approval, informed consent, animal ethics, conflicts of interest, funding notes, and open data sharing statements. Performance metrics, namely sensitivity (recall) and precision, were calculated.
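The metrics named above can be computed from standard confusion-matrix counts. A minimal sketch follows; the counts used in the usage example are hypothetical and chosen only to illustrate how perfect recall can coexist with low precision, as reported in the Results:

```python
def recall(tp: int, fn: int) -> float:
    """Sensitivity/recall: share of truly missing statements that were flagged."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Precision: share of flagged statements that were truly missing."""
    return tp / (tp + fp)

# Hypothetical counts: every truly missing statement is flagged (fn = 0),
# but many compliant articles are also flagged (fp = 21).
tp, fp, fn = 4, 21, 0
print(recall(tp, fn))     # 1.0  -- perfect sensitivity
print(precision(tp, fp))  # 0.16 -- low precision despite perfect recall
```

This illustrates the pattern in the Results: a model that flags liberally can catch every missing statement while still producing many false positives, which is why human verification of flagged items remains necessary.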
Results: ChatGPT exhibited high sensitivity and recall across all evaluated components, accurately identifying all missing ethics statements. However, precision varied markedly between categories: it was high for data availability statements (0.96) but low for funding statements (0.16). A comparative analysis between Q1 and Q4 journals showed a marked increase in missing ethics statements in the Q4 group, particularly for open data sharing statements (4 vs. 50 cases), ethics approval (2 vs. 5 cases), and informed consent statements (3 vs. 8 cases).
Conclusion: ChatGPT-4.0 shows considerable promise in preliminary screening, identifying missing ethics statements with high accuracy. However, its limited precision highlights the necessity of additional human checks. A balanced integration of artificial intelligence and human judgment is recommended to enhance editorial checks and maintain ethical standards in public health publishing.