Large language models (LLMs) like ChatGPT seem to be increasingly used for information seeking and analysis, including to support academic literature reviews. To test whether the results might sometimes include retracted research, we identified 217 retracted or otherwise concerning academic studies with high altmetric scores and asked ChatGPT 4o-mini to evaluate their quality 30 times each. Surprisingly, none of its 6510 reports mentioned that the articles were retracted or contained relevant errors, and it gave 190 of the articles relatively high scores (world leading, internationally excellent, or close). The 27 articles with the lowest scores were mostly criticised as weak, although the topic (but not the article) was described as controversial in five cases (e.g., hydroxychloroquine for COVID-19). In a follow-up investigation, 61 claims were extracted from retracted articles in the set, and ChatGPT 4o-mini was asked 10 times whether each was true. It gave a definitive yes or a positive response two-thirds of the time, including for at least one statement that had been shown to be false over a decade earlier. The results therefore emphasise, from an academic knowledge perspective, the importance of verifying information from LLMs when using them for information seeking or analysis.
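For readers who wish to try a similar repeated-prompting check themselves, the sketch below shows one possible set-up, assuming the OpenAI Python client; the prompt wording, rating scale, and function name are illustrative assumptions rather than the authors' actual protocol. Inspecting the collected replies for any mention of retraction or known errors would mirror the kind of check reported above.

```python
# Minimal sketch (not the authors' code): repeatedly ask gpt-4o-mini to rate an
# article's quality, assuming the OpenAI Python client (openai >= 1.0).
# The prompt text and REF-style scale are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def evaluate_article(title: str, abstract: str, repeats: int = 30) -> list[str]:
    """Collect several independent quality assessments of one article."""
    prompt = (
        "Assess the research quality of the following article on a scale from "
        "1* (recognised nationally) to 4* (world leading), and justify your score.\n\n"
        f"Title: {title}\n\nAbstract: {abstract}"
    )
    replies = []
    for _ in range(repeats):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        replies.append(response.choices[0].message.content)
    return replies
```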
Based on a survey, this study investigates the perceptions of researchers in Austria concerning scholarly publications, exploring publication criteria, established and emerging publication types, and their future recognition. The findings reveal that researchers value a diverse set of criteria, with content-related factors prioritised over formal ones. While traditional publication types remain dominant, novel forms, such as data publications and replication studies, are gaining recognition. Researchers (n = 616) express a desire for broader recognition of diverse types of work, particularly data publications, teaching materials, and software or code. The findings also reflect the predominantly research-to-research focus of scholarly communication, with limited emphasis on science-to-public engagement. An analysis of career stages shows that pre-doctoral and post-doctoral researchers tend to be more open-minded than professors regarding the future recognition of some novel types of publication. There are evident differences between disciplines, highlighting the need for a nuanced, subject-specific approach to evaluation and documentation. Overall, the survey results call for greater consideration of novel publication types in research assessment and documentation. Accordingly, libraries should enhance their research support services to assist in the publication, documentation, and archiving of additional types of publication.
The review criteria that reviewers and editors use are crucial in the journal peer review process. However, the review criteria for manuscripts are scattered across the literature, and their varied manifestations add further complexity. In response, we conducted a critical interpretive synthesis to provide a systematic criteria framework with clear definitions for reviewing manuscripts. We extracted review criteria from 157 heterogeneous sources, including 33 research articles, 20 literature reviews, 20 editorials and 84 reviewer guidelines from journals or publishers. The analysis of the evidence followed a ‘bottom-up’ approach. Five categories emerged (i.e., value to journal, effective use of literature, rigorousness, clarity and compliance), involving 12 components, 33 items and 79 entries. Drawing on the results, we developed a four-level criteria framework (i.e., categories-components-items-entries) for manuscript peer review. Additionally, we compared the content of review criteria across diverse fields. The findings provide a theoretical framework for standardised and systematised review criteria.
This study addresses critical questions about how current evaluative frameworks for academic research can effectively translate scholarly findings into practical applications and policies to tackle societal ‘grand challenges’. The scoping review was conducted using bibliometric methods and AI tools. Articles were drawn from a wide range of disciplines, with particular emphasis on the business and management fields, focusing on the burgeoning scholarship area of ‘business as a force for good’. The novel integration of generative AI research approaches underscores the transformative potential of AI-human collaboration in academic research. Metadata from 4051 articles were examined in the scoping review, with only 370 articles (9.1%) explicitly identified as relevant to societal impact. This finding reveals a substantial and concerning gap in research addressing the urgent social and environmental issues of our time. To address this gap, the study identifies six meta-themes related to enhancing the societal impact of research: business applications; faculty publication pressure; societal impact focus; sustainable development; university and scholarly rankings; and reference to responsible research frameworks. Key findings highlight critical misalignments between research outputs and the United Nations Sustainable Development Goals (SDGs) and a lack of practical business applications of research insights. The results emphasise the urgent need for academic institutions to expand evaluation criteria beyond traditional metrics to prioritise real-world impacts. Recommendations include developing holistic evaluation frameworks and incentivising research that addresses pressing societal challenges, shifting academia from a ‘scholar-to-scholar’ to a ‘scholar-to-society’ paradigm. The implications of this shift are applied to business-related scholarship and its potential to inspire meaningful societal impact through business practice.
This paper, part of the Harbingers project studying early career researchers (ECRs), focuses on the impact of artificial intelligence (AI) on scholarly communications (https://ciber-research.com/harbingers-3/index.html). It investigates citations and citing, their purpose, function and use, especially with respect to reputation, trust, publishing and AI. We also cover journal impact factors, the H-index, Scopus, Web of Science and Google Scholar. All of this concerns a research community for whom citations have special reputational and career-advancing value. This interview-based study covers a convenience sample of 91 ECRs from all disciplines and half a dozen countries. The interviews were conducted with minimal prompting about citations, giving the study a fresh feel by using the voices of ECRs wherever possible. Findings include: (1) citations are all-pervasive, although cropping up mostly in the reputational and trust arenas; (2) citations remain a major force in determining what is read, where to publish and what to trust; (3) there are no signs their value is diminishing; if anything, the opposite is true; (4) AI has given a boost to their use, primarily as a validity check; (5) there are strong signs that altmetrics are being taken up. Note that this was a preliminary study working with a convenience sample, intended to inform a future study; our findings should therefore be treated as early observations rather than firm conclusions.