The science-policy interface process known as ‘scientific assessment’ convenes large numbers of experts, policymakers, and stakeholders to deliberate and synthesise cross-disciplinary knowledge. Given the increasingly frequent and widespread use of scientific assessments over the past 30 years, both globally and in South Africa, it is surprising that few effectiveness evaluations have been undertaken. A mixed-methods case-study approach was used to evaluate the perceived effectiveness of six scientific assessment cases – two global, two regional and two national. To measure perceptions, a Generic/Procedural framework was developed, consisting of thirteen indicators based on the science-policy ‘dimensions’ of Credibility, Relevance and Legitimacy (CRELE). The cases were perceived to have performed better than average with respect to Output quality, Procedural fairness, Use in decision-making, Trustworthiness and Iterativity, and below average for Coproduction, Capacity building, Media communications, Transdisciplinarity and Financial resources. Perceptions of effectiveness varied with participant role, age, and country income level, revealing both pluralistic viewpoints and the subjective nature of participant-led evaluations. While Relevance is often considered the keystone dimension of CRELE, the cases performed better on indicators foundational to Credibility and Legitimacy than on those foundational to Relevance. To be successful, future scientific assessment practice will require more conscious consideration of Relevance, coupled with innovative epistemic practices in the spirit of the Pragmatic-Enlightened Model (PEM) of science-policy interaction.