Aim: This scoping review aims to investigate the reasons for adopting the Contributor Roles Taxonomy (CRediT) in scholarly publishing, identify barriers to its implementation or concerns about its use, and propose improvements to enhance its effectiveness in attributing individual contributions to research articles.
Methods: A comprehensive literature search was conducted following PRISMA guidelines across multiple databases: ProQuest, LISA, LISTA, EBSCO, PubMed, Scopus, and Web of Science Core Collection.
Results: From an initial pool of 732 papers, 45 were selected for inclusion in the review. The findings indicate that, while the adoption of CRediT promotes transparency and recognition of contributions beyond traditional authorship, several challenges remain. Key barriers include limited applicability across different research types, potential ethical concerns, and conflicts among contributors. Factors contributing to slow adoption include low awareness, inconsistent implementation, and cultural resistance within the research community. Additionally, ambiguous role definitions complicate attribution and fairness.
Conclusions: This review highlights CRediT's potential to enhance transparency and equitable recognition of diverse contributions in scholarly publishing. However, it underscores the need to address internal challenges and promote broader acceptance within the research community. Recommendations include establishing clearer role hierarchies, standardizing adoption policies, and integrating CRediT into metadata for improved contribution tracking.
Guided by Brey's Anticipatory Technology Ethics, we examined AI-based research mentors (AIRMs) through technology foresight as well as the identification and evaluation of ethical issues. Scenario planning was employed to inform foresight, yielding four plausible future scenarios: 1) AIRMs are used solely for guidance, 2) AIRMs are used for guidance and monitoring, 3) AIRMs are banned, and 4) AIRMs are used solely for monitoring. Resnik's twelve principles informed the identification of ethical issues within these scenarios. Our analysis revealed that certain principles (openness, education, legality, and mutual respect) were violated in all scenarios. Others were contravened to varying degrees across the scenarios; for example, freedom was violated only in scenarios where AIRMs were used for monitoring. Furthermore, the guidance scenario showed that AIRMs' responses could be manipulated to justify poor practice ("AIRMing"). In our evaluation, we weighed the ethical issues against the benefits and found the guidance-only scenario to be the least problematic. While this scenario offers benefits, such as providing expert guidance on research, ethical issues arise with regard to honesty, openness, credit, education, legality, and mutual respect. Therefore, policy must be developed to ensure that AIRMs are used solely for guidance while mitigating these issues.