
Research Synthesis Methods: Latest Publications

Impact of matrix-construction assumptions on quantitative overlap assessment in overviews: A meta-research study.
IF 6.1 | CAS Tier 2 (Biology) | JCR Q1 (Mathematical & Computational Biology) | Pub Date: 2026-03-01 | Epub Date: 2025-11-17 | DOI: 10.1017/rsm.2025.10056
Javier Bracchiglione, Nicolás Meza, Dawid Pieper, Carole Lunny, Manuel Vargas-Peirano, Johanna Vicuña, Fernando Briceño, Roberto Garnham Parra, Ignacio Pérez Carrasco, Gerard Urrútia, Xavier Bonfill, Eva Madrid

Overlap of primary studies among multiple systematic reviews (SRs) is a major challenge when conducting overviews. The corrected covered area (CCA) is a metric computed from a matrix of evidence that quantifies overlap. Therefore, the assumptions used to generate the matrix may significantly affect the CCA. We aim to explore how these varying assumptions influence CCA calculations. We searched two databases for intervention-focused overviews published during 2023. Two reviewers conducted study selection and data extraction. We extracted overview characteristics and methods to handle overlap. For seven sampled overviews, we calculated overall and pairwise CCA across 16 scenarios, representing four matrix-construction assumptions. Of 193 included overviews, only 23 (11.9%) adhered to an overview-specific reporting guideline (e.g. PRIOR). Eighty-five (44.0%) did not address overlap; 14 (7.3%) only mentioned it in the discussion; and 94 (48.7%) incorporated it into methods or results (38 using CCA). Among the seven sampled overviews, CCA values varied depending on matrix-construction assumptions, ranging from 1.2% to 13.5% with the overall method and 0.0% to 15.7% with the pairwise method. CCA values may vary depending on the assumptions made during matrix construction, including scope, treatment of structural missingness, and handling of publication threads. This variability calls into question the uncritical use of current CCA thresholds and underscores the need for overview authors to report both overall and pairwise CCA calculations. Our preliminary guidance for transparently reporting matrix-construction assumptions may improve the accuracy and reproducibility of CCA assessments.
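The CCA computation the abstract refers to follows a published formula: with N total inclusions in the matrix (counting duplicates), r index publications (rows), and c reviews (columns), CCA = (N - r) / (r(c - 1)). A minimal sketch with an invented matrix; the pairwise convention used here (restricting rows to studies appearing in at least one of the two reviews) is one common choice, not necessarily the one used in every scenario of the paper:

```python
# Corrected covered area (CCA) from a citation matrix:
# CCA = (N - r) / (r * (c - 1)), where N = total inclusions
# (sum of the matrix, duplicates counted), r = index publications
# (rows), c = reviews (columns). Requires c > 1.

def overall_cca(matrix):
    """matrix[i][j] = 1 if primary study i is included in review j."""
    r = len(matrix)                      # number of primary studies
    c = len(matrix[0])                   # number of reviews
    n = sum(sum(row) for row in matrix)  # total inclusions, with duplicates
    return (n - r) / (r * (c - 1))

def pairwise_cca(matrix, j, k):
    """CCA restricted to reviews j and k (rows in at least one of the two)."""
    pair = [[row[j], row[k]] for row in matrix if row[j] or row[k]]
    return overall_cca(pair)

# 4 primary studies across 3 reviews (invented data)
m = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 0],
     [1, 1, 1]]
print(overall_cca(m))  # (8 - 4) / (4 * 2) = 0.5
```

Varying the matrix-construction assumptions (which rows and columns enter `m`) changes N, r, and c, which is exactly how the 16 scenarios in the study produce different CCA values from the same overviews.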

Research Synthesis Methods 17(2): 348-364. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873615/pdf/
Citations: 0
Compact large language models for title and abstract screening in systematic reviews: An assessment of feasibility, accuracy, and workload reduction.
IF 6.1 | CAS Tier 2 (Biology) | JCR Q1 (Mathematical & Computational Biology) | Pub Date: 2026-03-01 | Epub Date: 2025-11-13 | DOI: 10.1017/rsm.2025.10044
Antonio Sciurti, Giuseppe Migliara, Leonardo Maria Siena, Claudia Isonne, Maria Roberta De Blasiis, Alessandra Sinopoli, Jessica Iera, Carolina Marzuillo, Corrado De Vito, Paolo Villari, Valentina Baccolini

Systematic reviews play a critical role in evidence-based research but are labor-intensive, especially during title and abstract screening. Compact large language models (LLMs) offer potential to automate this process, balancing time/cost requirements and accuracy. The aim of this study is to assess the feasibility, accuracy, and workload reduction achieved by three compact LLMs (GPT-4o mini, Llama 3.1 8B, and Gemma 2 9B) in screening titles and abstracts. Records were sourced from three previously published systematic reviews, and the LLMs were asked to rate each record from 0 to 100 for inclusion using a structured prompt. Predefined 25-, 50-, and 75-rating thresholds were used to compute performance metrics (balanced accuracy, sensitivity, specificity, positive and negative predictive value, and workload saving). Processing time and costs were recorded. Across the systematic reviews, LLMs achieved high sensitivity (up to 100%) but low precision (below 10%) for records included at full text. Specificity and workload savings improved at higher thresholds, with the 50- and 75-rating thresholds offering optimal trade-offs. GPT-4o mini, accessed via application programming interface, was the fastest model (~40 minutes max.) and incurred usage costs of $0.14-$1.93 per review. Llama 3.1 8B and Gemma 2 9B ran locally, took longer (~4 hours max.), and were free to use. LLMs were highly sensitive tools for the title/abstract screening process. High specificity values were reached, allowing for significant workload savings at reasonable cost and processing time. Conversely, we found them to be imprecise. However, high sensitivity and workload reduction are the key factors for their usage in the title/abstract screening phase of systematic reviews.
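The threshold-based metrics described above are straightforward to compute from ratings and gold-standard labels. A minimal sketch with invented data; the workload-saving definition used here (share of records the human reviewer never has to read) is an assumption, not necessarily the authors' exact operationalization:

```python
# Screening metrics at a rating threshold: a record is flagged for
# human review when its 0-100 LLM rating meets the threshold.
# Ratings and labels below are invented for illustration.

def screening_metrics(ratings, labels, threshold):
    """labels[i] is True if record i was truly included."""
    flagged = [r >= threshold for r in ratings]
    tp = sum(f and l for f, l in zip(flagged, labels))
    fn = sum(not f and l for f, l in zip(flagged, labels))
    tn = sum(not f and not l for f, l in zip(flagged, labels))
    fp = sum(f and not l for f, l in zip(flagged, labels))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        # share of records the human reviewer never has to screen
        "workload_saving": (tn + fn) / len(ratings),
    }

ratings = [90, 80, 70, 40, 30, 20, 10, 5]
labels  = [True, True, False, False, False, False, False, False]
print(screening_metrics(ratings, labels, 50))
# sensitivity 1.0, workload saving 0.625 on this toy data
```

Raising the threshold trades sensitivity for specificity and workload saving, which is the trade-off the 25/50/75 comparison in the study quantifies.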

Research Synthesis Methods 17(2): 332-347. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873614/pdf/
Citations: 0
Beyond human gold standards: A multimodel framework for automated abstract classification and information extraction.
IF 6.1 | CAS Tier 2 (Biology) | JCR Q1 (Mathematical & Computational Biology) | Pub Date: 2026-03-01 | Epub Date: 2025-11-17 | DOI: 10.1017/rsm.2025.10054
Delphine S Courvoisier, Diana Buitrago-Garcia, Clément P Buclin, Nils Bürgisser, Michele Iudici, Denis Mongin

Meta-research and evidence synthesis require considerable resources. Large language models (LLMs) have emerged as promising tools to assist in these processes, yet their performance varies across models, limiting their reliability. Taking advantage of the large availability of small size (<10 billion parameters) open-source LLMs, we implemented an agreement-based framework in which a decision is taken only if at least a given number of LLMs produce the same response. The decision is otherwise withheld. This approach was tested on 1020 abstracts of randomized controlled trials in rheumatology, using 2 classic literature review tasks: (1) classifying each intervention as drug or nondrug based on text interpretation and (2) extracting the total number of randomized patients, a task that sometimes required calculations. Re-examining abstracts where at least 4 LLMs disagreed with the human gold standard (dual review with adjudication) allowed constructing an improved gold standard. Compared to a human gold standard and single large LLMs (>70 billion parameters), our framework demonstrated robust performance: several model combinations achieved accuracies above 95% exceeding the human gold standard on at least 85% of abstracts (e.g., 3 of 5 models, 4 of 6 models, or 5 of 7 models). Performance variability across individual models was not an issue, as low-performing models contributed fewer accepted decisions. This agreement-based framework offers a scalable solution that can replace human reviewers for most abstracts, reserving human expertise for more complex cases. Such frameworks could significantly reduce the manual burden in systematic reviews while maintaining high accuracy and reproducibility.
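The agreement rule at the core of the framework (accept a value only when at least k of the n models concur, otherwise withhold the record for human review) can be sketched as follows; the model outputs are invented:

```python
# Agreement-based decision rule: accept an extracted value only when
# at least `k` models return the same answer; otherwise withhold the
# record for human adjudication. Model outputs here are invented.

from collections import Counter

def agree(answers, k):
    """answers: one response per model; returns (value, accepted)."""
    value, votes = Counter(answers).most_common(1)[0]
    return (value, True) if votes >= k else (None, False)

# e.g., "number of randomized patients" extracted by five models
print(agree([120, 120, 120, 118, 120], k=3))  # (120, True)
print(agree([120, 118, 116, 120, 118], k=3))  # (None, False)
```

The design choice the abstract highlights falls out naturally: a low-performing model rarely matches the others, so it contributes few accepted decisions rather than dragging accuracy down.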

Research Synthesis Methods 17(2): 365-377. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873610/pdf/
Citations: 0
Bayesian workflow for bias-adjustment model in meta-analysis.
IF 6.1 | CAS Tier 2 (Biology) | JCR Q1 (Mathematical & Computational Biology) | Pub Date: 2026-03-01 | Epub Date: 2025-11-13 | DOI: 10.1017/rsm.2025.10050
Juyoung Jung, Ariel M Aloe

Bayesian hierarchical models offer a principled framework for adjusting for study-level bias in meta-analysis, but their complexity and sensitivity to prior specifications necessitate a systematic framework for robust application. This study demonstrates the application of a Bayesian workflow to this challenge, comparing a standard random-effects model to a bias-adjustment model across a real-world dataset and a targeted simulation study. The workflow revealed a high sensitivity of results to the prior on bias probability, showing that while the simpler random-effects model had superior predictive accuracy as measured by the widely applicable information criterion, the bias-adjustment model successfully propagated uncertainty by producing wider, more conservative credible intervals. The simulation confirmed the model's ability to recover true parameters when priors were well-specified. These results establish the Bayesian workflow as a principled framework for diagnosing model sensitivities and ensuring the transparent application of complex bias-adjustment models in evidence synthesis.
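As a concrete illustration of the kind of model being compared, a generic bias-adjusted random-effects hierarchy has the following form (an assumed textbook-style specification, not necessarily the authors' exact model):

```latex
% Illustrative bias-adjusted random-effects meta-analysis model:
% y_i: observed study effect, s_i: its standard error,
% B_i: indicator that study i is biased, \pi: prior bias probability.
y_i \sim \mathcal{N}(\theta_i + B_i \beta_i,\; s_i^2), \qquad
\theta_i \sim \mathcal{N}(\mu, \tau^2), \qquad
\beta_i \sim \mathcal{N}(\lambda, \kappa^2), \qquad
B_i \sim \mathrm{Bernoulli}(\pi).
```

Setting all $B_i = 0$ recovers the standard random-effects model; the prior sensitivity the workflow diagnoses corresponds to the choice of prior on the bias probability $\pi$.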

Research Synthesis Methods 17(2): 293-313. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873618/pdf/
Citations: 0
RaCE: A rank-clustering estimation method for network meta-analysis.
IF 6.1 | CAS Tier 2 (Biology) | JCR Q1 (Mathematical & Computational Biology) | Pub Date: 2026-03-01 | Epub Date: 2025-11-13 | DOI: 10.1017/rsm.2025.10049
Michael Pearce, Shouhao Zhou

Ranking multiple interventions is a crucial task in network meta-analysis (NMA) to guide clinical and policy decisions. However, conventional ranking methods often oversimplify treatment distinctions, potentially yielding misleading conclusions due to inherent uncertainty in relative intervention effects. To address these limitations, we propose a novel Bayesian rank-clustering estimation approach, termed rank-clustering estimation (RaCE), specifically developed for NMA. Rather than identifying a single "best" intervention, RaCE enables the probabilistic clustering of interventions with similar effectiveness, offering a more nuanced and parsimonious interpretation. By decoupling the clustering procedure from the NMA modeling process, RaCE is a flexible and broadly applicable approach that can accommodate different types of outcomes (binary, continuous, and survival), modeling approaches (arm-based and contrast-based), and estimation frameworks (frequentist or Bayesian). Simulation studies demonstrate that RaCE effectively captures rank-clusters even under conditions of substantial uncertainty and overlapping intervention effects, providing more reasonable result interpretation than traditional single-ranking methods. We illustrate the practical utility of RaCE through an NMA application to frontline immunochemotherapies for follicular lymphoma, revealing clinically relevant clusters among treatments previously assumed to have distinct ranks. Overall, RaCE provides a valuable tool for researchers to enhance rank estimation and interpretability, facilitating evidence-based decision-making in complex intervention landscapes.
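For context, the single-ranking summary that RaCE refines can be computed directly from posterior draws of the relative effects. The sketch below computes rank probabilities for invented draws; the rank-clustering step itself is the paper's contribution and is not reproduced here:

```python
# Posterior rank probabilities from simulated effect draws -- the
# standard single-ranking summary that rank-clustering refines.
# Treatments and draws are invented: A and B overlap heavily,
# C is clearly worse, so a rank-clustering method would be expected
# to group {A, B} together.

import random

random.seed(1)
draws = {
    "A": [random.gauss(1.0, 0.5) for _ in range(4000)],
    "B": [random.gauss(0.9, 0.5) for _ in range(4000)],
    "C": [random.gauss(-1.0, 0.5) for _ in range(4000)],
}

def rank_probabilities(draws):
    """P(treatment t is ranked r-th best), over joint posterior draws."""
    names = list(draws)
    n = len(next(iter(draws.values())))
    probs = {t: [0.0] * len(names) for t in names}
    for i in range(n):
        order = sorted(names, key=lambda t: -draws[t][i])  # higher = better
        for r, t in enumerate(order):
            probs[t][r] += 1 / n
    return probs

probs = rank_probabilities(draws)
for t, p in probs.items():
    print(t, [round(x, 2) for x in p])
```

On data like this, A and B split the first two ranks roughly evenly while C sits almost surely last, which illustrates why a single "best" rank can be misleading when effects overlap.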

Research Synthesis Methods 17(2): 314-331. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873617/pdf/
Citations: 0
The inclusion or exclusion of studies based on critical appraisal results in JBI qualitative systematic reviews: An analysis of practices.
IF 6.1 | CAS Tier 2 (Biology) | JCR Q1 (Mathematical & Computational Biology) | Pub Date: 2026-03-01 | Epub Date: 2025-10-23 | DOI: 10.1017/rsm.2025.10042
Romy Menghao Jia, Cindy Stern

Critical appraisal is a core component of JBI qualitative evidence synthesis, offering insights into the quality of included studies and their potential influence on synthesized findings. However, limited guidance exists on whether, when, and how to exclude studies based on appraisal results. This study examined the methods used in JBI qualitative systematic reviews and the implications for synthesized findings. In this study, a systematic analysis of qualitative reviews published between 2018 and 2022 in JBI Evidence Synthesis was conducted. Data on decisions and their justifications were extracted from reviews and protocols. Descriptive and content analysis explored variations in the reported methods. Forty-five reviews were included. Approaches reported varied widely: 24% of reviews included all studies regardless of quality, while others applied exclusion criteria (36%), cutoff scores (11%), or multiple methods (9%). Limited justifications were provided for the approaches. Few reviews cited methodological references to support their decisions. Review authors reported their approach in various sections of the review, with inconsistencies identified in 18% of the sample. In addition, unclear or ambiguous descriptions were also identified in 18% of the included reviews. No clear differences were observed in ConQual scores between reviews that excluded studies and those that did not. Overall, the variability raises concerns about the credibility, transparency, and reproducibility of JBI qualitative systematic reviews. Decisions regarding the inclusion or exclusion of studies based on critical appraisal need to be clearly justified and consistently reported. Further methodological research is needed to support rigorous decision-making and to improve the reliability of synthesized findings.

Research Synthesis Methods 17(2): 277-292. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873616/pdf/
Citations: 0
The application of ROBINS-I guidance in systematic reviews of non-randomised studies: A descriptive study.
IF 6.1 | CAS Tier 2 (Biology) | JCR Q1 (Mathematical & Computational Biology) | Pub Date: 2026-03-01 | Epub Date: 2025-10-22 | DOI: 10.1017/rsm.2025.10048
Zipporah Iheozor-Ejiofor, Jelena Savović, Russell J Bowater, Julian P T Higgins

The ROBINS-I tool is a commonly used tool to assess risk of bias in non-randomised studies of interventions (NRSI) included in systematic reviews. The reporting of ROBINS-I results is important for decision-makers using systematic reviews to understand the weaknesses of the evidence. In particular, systematic review authors should apply the tool according to the guidance provided. This study aims to describe how ROBINS-I guidance is currently applied by review authors. In January 2023, we undertook a citation search and screened titles and abstracts of records published in the previous 6 months. We included systematic reviews of non-randomised studies of intervention where ROBINS-I had been used for risk-of-bias assessment. Based on 10 criteria, we summarised the diverse ways in which reviews deviated from or reported the use of ROBINS-I. In total, 492 reviews met our inclusion criteria. Only one review met all the expectations of the ROBINS-I guidance. A small proportion of reviews deviated from the seven standard domains (3%), judgements (13%), or in other ways (1%). Of the 476 (97%) reviews that reported some ROBINS-I results, only 57 (12%) reviews reported ROBINS-I results at the outcome level compared with 203 reviews that reported ROBINS-I results at the study level alone. Most systematic reviews of NRSIs do not fully apply the ROBINS-I guidance. This raises concerns around the validity of the ROBINS-I results reported and the use of the evidence from these reviews in decision-making.

Research Synthesis Methods, vol. 17, no. 2, pp. 265-276. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873613/pdf/
Citations: 0
Conducting evidence synthesis and developing evidence-based advice in public health and beyond: A scoping review and map of methods guidance.
IF 6.1 · CAS Tier 2 (Biology) · Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY · Pub Date: 2026-03-01 · Epub Date: 2025-11-18 · DOI: 10.1017/rsm.2025.10051
Ani Movsisyan, Kolahta Asres Ioab, Jan William Himmels, Gina Loretta Bantle, Andreea Dobrescu, Signe Flottorp, Frode Forland, Arianna Gadinger, Christina Koscher-Kien, Irma Klerings, Joerg J Meerpohl, Barbara Nussbaumer-Streit, Brigitte Strahwald, Eva A Rehfuess

Effective public health decision-making relies on rigorous evidence synthesis and transparent processes to facilitate its use. However, existing methods guidance has primarily been developed within clinical medicine and may not sufficiently address the complexities of public health, such as population-level considerations, multiple evidence streams, and time-sensitive decision-making. This work contributes to the European Centre for Disease Prevention and Control initiative on methods guidance development for evidence synthesis and evidence-based public health advice by systematically identifying and mapping guidance from health and health-related disciplines. Structured searches were conducted across multiple scientific databases and websites of key institutions, followed by screening and data coding. Of the 17,386 records identified, 247 documents were classified as 'guidance products' providing a set of principles or recommendations on the overall process of developing evidence synthesis and evidence-based advice. While many were classified as 'generic' in scope, a majority originated from clinical medicine and focused on systematic reviews of intervention effects. Only 41 documents explicitly addressed public health. Key gaps included approaches for rapid evidence synthesis and decision-making, and methods for synthesising evidence from laboratory research, disease burden, and prevalence studies. The findings highlight a need for methodological development that aligns with the realities of public health practice, particularly in emergency contexts. This review provides a key repository for methodologists, researchers, and decision-makers in public health, as well as clinical medicine and health care in Europe and worldwide, supporting the evolution of more inclusive and adaptable approaches to public health evidence synthesis and decision-making.

Research Synthesis Methods, vol. 17, no. 2, pp. 240-264. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873621/pdf/
Citations: 0
Guidance for manuscript submissions testing the use of generative AI for systematic review and meta-analysis.
IF 6.1 · CAS Tier 2 (Biology) · Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY · Pub Date: 2026-03-01 · Epub Date: 2025-12-11 · DOI: 10.1017/rsm.2025.10058
Oluwaseun Farotimi, Adam Dunn, Caspar J Van Lissa, Joshua Richard Polanin, Dimitris Mavridis, Terri D Pigott
Research Synthesis Methods, vol. 17, no. 2, pp. 237-239. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873612/pdf/
Citations: 0
Shiny-MAGEC: A Bayesian R shiny application for meta-analysis of censored adverse events.
IF 6.1 · CAS Tier 2 (Biology) · Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY · Pub Date: 2026-03-01 · Epub Date: 2025-11-24 · DOI: 10.1017/rsm.2025.10052
Zihan Zhou, Zizhong Tian, Christine Peterson, Le Bao, Shouhao Zhou

Accurate assessment of adverse event (AE) incidence is critical in clinical research for drug safety. While meta-analysis serves as an essential tool to comprehensively synthesize the evidence across multiple studies, incomplete AE reporting in clinical trials remains a persistent challenge. In particular, AEs occurring below study-specific reporting thresholds are often omitted from publications, leading to left-censored data. Failure to account for these censored AE counts can result in biased AE incidence estimates. We present an R Shiny application that implements a Bayesian meta-analysis model specifically designed to incorporate censored AE data into the estimation process. This interactive tool provides a user-friendly interface for researchers to conduct AE meta-analyses and estimate the AE incidence probability using an unbiased approach. It also enables direct comparisons between models that either incorporate or ignore censoring, highlighting the biases introduced by conventional approaches. This tutorial demonstrates the Shiny application's functionality through an illustrative example on meta-analysis of PD-1/PD-L1 inhibitor safety and highlights the importance of this tool in improving AE risk assessment. Ultimately, the new Shiny app facilitates more accurate and transparent drug safety evaluations. The Shiny-MAGEC app is available at: https://zihanzhou98.shinyapps.io/Shiny-MAGEC/.

Research Synthesis Methods, vol. 17, no. 2, pp. 378-388. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873611/pdf/
Citations: 0
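The left-censoring idea behind the model described above can be sketched with a simple binomial likelihood. This is not the Shiny-MAGEC model itself (the abstract does not spell out its Bayesian specification); it only illustrates, under a hypothetical binomial assumption, how an arm whose AE count fell below the study's reporting threshold contributes P(X < threshold) to the likelihood rather than being treated as zero or discarded. The function names are illustrative.

```python
from math import comb


def binom_pmf(k, n, p):
    """Binomial probability of exactly k events among n participants."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)


def ae_likelihood(n, count, p, threshold=None):
    """Likelihood contribution of one arm's AE report, given incidence p.

    count is the reported number of AEs among n participants; if count is
    None, the AE was omitted because it fell below the study's reporting
    threshold, so the arm contributes P(X < threshold) (left-censoring).
    """
    if count is not None:
        return binom_pmf(count, n, p)
    if threshold is None:
        raise ValueError("censored observations need a reporting threshold")
    # Censored: the unobserved count could be 0, 1, ..., threshold - 1.
    return sum(binom_pmf(k, n, p) for k in range(threshold))
```

Naively substituting 0 for an unreported count would concentrate the likelihood at X = 0 and bias the pooled incidence downward; the censored term above is the standard way to keep such arms in the synthesis without that bias.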